A Variable Neighborhood Search Approach for the Dynamic Single Row Facility Layout Problem

Abstract: The dynamic single row facility layout problem (DSRFLP) is defined as the problem of arranging facilities along a straight line during a multi-period planning horizon with the objective of minimizing the sum of the material handling and rearrangement costs. The material handling cost is the sum of the products of the flow costs and center-to-center distances between facilities. In this paper, we focus on metaheuristic algorithms for this problem. The main contributions of the paper are three-fold. First, a variable neighborhood search (VNS) algorithm for the DSRFLP is proposed. The main version of VNS uses an innovative strategy to start the search from a solution obtained by constructing an instance of the single row facility layout problem (SRFLP) from a given instance of the DSRFLP and applying a heuristic algorithm for the former problem. Second, a fast local search (LS) procedure is developed. The innovations of this procedure are two-fold: (i) the fast insertion and swap neighborhood exploration techniques are adapted for the case of the dynamic version of the SRFLP; and (ii) to reduce the computational time, the swap operation is restricted to pairs of facilities of equal lengths. Provided the number of planning periods is a constant, the neighborhood exploration procedures for n facilities have only O(n²) time complexity. The superiority of these procedures over traditional LS techniques is also shown by performing numerical tests. Third, computational experiments on DSRFLP instances with up to 200 facilities and three or five planning periods are carried out to validate the effectiveness of the VNS approach. The proposed VNS heuristic is compared with the simulated annealing (SA) method, which is the state-of-the-art algorithm for the DSRFLP. Experiments show that VNS outperforms SA by a significant margin.


Introduction
Facility layout problems are concerned with finding an efficient arrangement of physical facilities (e.g., machines or manufacturing cells) on a planar site. An important member of this class of problems is the single row facility layout problem (SRFLP). It asks to arrange the facilities along a straight line so as to minimize the sum of the products of the flow costs and center-to-center distances between facilities [1]. The objective function of the SRFLP represents the material handling cost, which is a good measure of the efficiency of a layout. A feasible solution is a permutation of facilities. It can be noted that the SRFLP is a static problem, because the material flows between facilities are assumed to be constant. This assumption, however, may not always be valid in practice. In a dynamic layout, the flows of material between facilities can change during a planning horizon. Many factors play a role in this, such as changes in the design of existing products, the removal of an existing product or the addition of a new product to the product line, varying product demand, shortening product life cycles, changes in the sequence of operations, and the introduction of new safety standards. Because of changes in the layout environment, a multi-period planning horizon is considered. The material flows between facilities can change from one planning period to the next. When minimizing the material handling cost, it may happen that the permutation of facilities in period t, t > 1, is different from the permutation of facilities in period t − 1. In such a case, one or more facilities are moved (shifted) to new locations at the beginning of period t. The objective function of the dynamic facility layout problem consists of two parts: the material handling cost and the rearrangement costs of facilities.
A dynamic version of the SRFLP, called the dynamic single-row facility layout problem (DSRFLP), was introduced by Şahin et al. [2]. In their problem formulation, the rearrangement cost for a facility occurs when the center coordinate of this facility in one planning period is different from that in the preceding or succeeding periods. Figure 1 shows a layout plan for a DSRFLP instance with seven facilities and three planning periods. Costs are incurred for the relocation of facilities 3, 4, and 5 at the beginning of period 2 and for the relocation of facilities 2 and 4 at the beginning of period 3. Notice that facilities 2 and 4 are of equal size and, therefore, facility 5 is not moved in period 3. Let us denote the set of facilities by H, their number by n, and the number of planning periods by m. A feasible solution of the DSRFLP is an ordered collection p = (p_1, . . . , p_m) of m n-element permutations p_t = (p_t(1), . . . , p_t(n)), t = 1, . . . , m, where p_t(i) ∈ H, i ∈ {1, . . . , n}, is the facility in the ith position during period t. We denote the set of all feasible solutions of the DSRFLP by Π_m. Let L_s, s ∈ {1, . . . , n}, stand for the length of facility s. For convenience, the main notations used in this paper are presented in Table 1. Mathematically, the DSRFLP can be expressed as follows:

min_{p ∈ Π_m} F(p) = Σ_{t=1}^{m} Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} φ_{t p_t(i) p_t(j)} ψ_{p_t(i) p_t(j)} d_t(p_t(i), p_t(j)) + Σ_{t=2}^{m} Σ_{s ∈ I(p_{t−1}, p_t)} r_ts,   (1)

where

d_t(p_t(i), p_t(j)) = L_{p_t(i)}/2 + Σ_{i<k<j} L_{p_t(k)} + L_{p_t(j)}/2,   (2)

φ_{t p_t(i) p_t(j)} represents the material flow between facilities p_t(i) and p_t(j) during period t, ψ_{p_t(i) p_t(j)} is the cost of transferring a unit of material per distance unit between facilities p_t(i) and p_t(j), r_ts is the rearrangement cost of facility s at the beginning of period t, and I(p_{t−1}, p_t) is the set of facilities whose location during period t differs from that during period t − 1 (in Figure 1, I(p_1, p_2) = {3, 4, 5} and I(p_2, p_3) = {2, 4}). For brevity, the material flow cost between facilities s and u during period t is written as

w_tsu = φ_tsu ψ_su.   (3)
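To make the objective function concrete, the sketch below evaluates a DSRFLP solution by summing the material handling cost over the periods and the rearrangement costs of relocated facilities. This is an illustration of the definition, not the paper's implementation; the names are ours, and `w[t]` is assumed to hold the combined flow costs w_tsu = φ_tsu ψ_su as an upper-triangular matrix.

```python
def centers(perm, lengths):
    """Center coordinate of each facility for a left-to-right layout."""
    x, offset = {}, 0.0
    for s in perm:
        x[s] = offset + lengths[s] / 2.0
        offset += lengths[s]
    return x

def dsrflp_cost(perms, lengths, w, r):
    """DSRFLP objective: perms[t] is the permutation for period t,
    w[t][s][u] (s < u) the flow cost, and r[t][s] the rearrangement
    cost of facility s at the beginning of period t."""
    xs = [centers(p, lengths) for p in perms]
    n, total = len(lengths), 0.0
    for t, x in enumerate(xs):
        # material handling cost for period t
        for s in range(n):
            for u in range(s + 1, n):
                total += w[t][s][u] * abs(x[s] - x[u])
        # rearrangement cost for facilities whose center moved since t - 1
        if t > 0:
            total += sum(r[t][s] for s in range(n) if x[s] != xs[t - 1][s])
    return total
```

On a tiny 3-facility, 2-period instance, the function simply adds the two period flow costs and the rearrangement charges for the facilities whose centers changed.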
Equation (2) determines the distance between facilities, and Equation (3) gives the cost of material flow per distance unit between facilities.

Table 1. Main notations used in the paper.

H: Set of facilities
n: Number of facilities
m: Number of planning periods
p_t: Permutation of facilities in period t
Π_m: Set of feasible solutions of the DSRFLP
L_s: Length of facility s
φ_tsu: Material flow between facilities s and u in period t
ψ_su: Cost of transferring a unit of material per distance unit between facilities s and u
w_tsu: Material flow cost between facilities s and u in period t
x_ts: Center coordinate of facility s in period t
d_t(s, u): Distance between centers of facilities s and u in period t (d_t(s, u) = |x_ts − x_tu|)
r_ts: Rearrangement cost of facility s at the beginning of period t
I(p_{t−1}, p_t): Set of facilities whose location during period t differs from that during period t − 1
w_su: Material flow cost between facilities s and u in the SRFLP instance
F: Objective function of the DSRFLP
F*: Objective function value of the best solution
p*: Best solution
F̄: Average objective function value
F_avdev: Average deviation of the objective function value from the reference value
c_tq: Sum of flow costs between the first q facilities and the remaining n − q facilities during period t
G(u, t, x): Change in rearrangement cost incurred by placing facility u at location x during period t
x_ts(l): Center coordinate of facility s when it is inserted at position l in permutation p_t
Ñ_z(p): Interchange neighborhood of depth z of solution p
N(p): Insertion neighborhood of solution p
T_lim: Maximum time limit for algorithm execution
In the literature, there are several other facility layout problems that are akin to the SRFLP. One of them is the constrained SRFLP, in which some facilities need to be placed in certain locations or in specified orders. A permutation-based genetic algorithm [28] and a fireworks algorithm [29] were proposed for solving this problem. Keller [30] developed three construction heuristics for the SRFLP with machine-spanning clearances. Amaral [31] and Yang et al. [32] proposed mixed-integer programming models for the parallel row ordering problem. Several algorithms were presented to solve the corridor allocation problem, such as simulated annealing and tabu search [33], a permutation-based genetic algorithm hybridized with a local search technique [34], and a scatter search algorithm [35]. Recently, attention has turned to the space-free multi-row facility layout problem. An exact method for this problem was presented in [36]. An efficient VNS algorithm for the same problem was developed in [37].
A mixed-integer linear programming model and solution approaches for the DSRFLP were proposed by Şahin et al. [2]. They used CPLEX to solve the linear model. However, the solver failed to produce a provably optimal solution for instances with more than 10 facilities. To find high-quality solutions for larger DSRFLP instances, Şahin et al. [2] proposed two metaheuristic-based approaches: a genetic algorithm (GA) and a simulated annealing (SA) technique. Their GA is strengthened by integrating a restart strategy and applying the concept of acceptance probability. The proposed SA algorithm is also enhanced with a restart strategy. The authors reported computational results on 20 problem instances with up to 100 facilities and 3 or 5 planning periods. The empirical results demonstrated the superiority of the SA algorithm over the GA implementation.
There is a vast amount of literature related to the dynamic version of the facility layout problems. Gong et al. [38] proposed a hybrid harmony search algorithm for the dynamic parallel row ordering problem. Their algorithm combines a harmony search technique with a tabu search heuristic. The authors presented the results of computational experiments on problem instances with up to 70 facilities. Guan et al. [39] proposed a hybrid evolution algorithm for solving a dynamic extended row facility layout problem. Both combinatorial and continuous aspects of the problem were taken into account. However, historically, most algorithms in the field were developed for the dynamic facility layout problem (DFLP), in which the layout of each timeframe is modeled as a quadratic assignment problem. The first such algorithms (dynamic programming-based optimal and heuristic procedures) were proposed by Rosenblatt [40]. Later, many metaheuristic-based algorithms for the DFLP were reported. Balakrishnan and Cheng [41] developed a genetic algorithm for solving this problem. A hybrid GA for the DFLP was proposed in [42]. A simulated annealing heuristic for this problem was presented in [43]. Ant colony optimization algorithms for the DFLP were developed in [44,45]. Hybrid ant system heuristics were proposed in [46]. Şahin and Türkbey [47] presented an algorithm hybridizing SA with tabu search. Three new tabu search heuristics for the DFLP were developed by McKendall and Liu [48]. Hosseini-Nasab and Emami [49] designed a hybrid particle swarm optimization algorithm to solve the DFLP. Turanoglu and Akkaya [50] proposed a hybrid algorithm combining SA and a bacterial foraging optimization technique. The results of the experimental comparison of a number of different algorithms from the literature were provided in a recent study on the DFLP by Zouein and Kattan [45]. A review of the recent advances on the DFLP can be found in [51].
The extensive literature on both static and dynamic versions of the facility layout problems was reviewed in [52].

Our Contribution
The analysis of the literature shows that there has been considerable interest in developing algorithms for dynamic facility layout problems. One such problem is the DSRFLP. However, research on the DSRFLP is in its early stages. Motivated by this observation, we aim to develop a reasonably fast heuristic algorithm that performs well on large DSRFLP instances. Our algorithm is based on the VNS metaheuristic. One purpose of this paper is to design a fast local search (LS) procedure that takes only O(1) time to compute the gain of a swap or insertion operation. The procedure plays a central role within the VNS framework. Our specific goal is to compare the performance of the VNS technique with that of the SA algorithm proposed in [2].
The main contributions of this paper are summarized as follows:
• A variable neighborhood search algorithm to solve the DSRFLP. It is one of the first heuristic approaches proposed to deal with this new problem. The approach uses an innovative strategy to start the search from a solution obtained by constructing an instance of the SRFLP from a given instance of the DSRFLP and applying a heuristic algorithm for the former problem. A simpler version of VNS uses a random permutation of facilities as a starting solution.
• A fast local search procedure based on insertion and swap neighborhood exploration techniques adapted to the dynamic version of the SRFLP. To reduce the computational time, the swap operation is restricted to pairs of facilities of equal lengths. For a constant number of planning periods, the neighborhood exploration procedures run in O(n²) time.
• Computational experiments on DSRFLP instances with up to 200 facilities and three or five planning periods, including a comparison of VNS with the state-of-the-art simulated annealing algorithm.

The rest of the paper is organized as follows. In the next section, we present our VNS approach. In Section 3, we give a detailed description of our LS procedure. Section 4 is devoted to the experimental analysis and comparisons of algorithms. Section 5 provides an empirical analysis of LS variants. Concluding remarks are given in Section 6.

Variable Neighborhood Search
The variable neighborhood search method is a metaheuristic that has been shown to be very successful in solving many combinatorial optimization problems. The basic idea of VNS is the systematic change of a neighborhood combined with solution perturbation and local search procedures. During algorithm execution, the neighborhood of a solution is explored using a set of predefined neighborhood structures. Since its introduction in 1997 [53], VNS has undergone various modifications and enhancements. A discussion of the basic concepts and successful applications of VNS can be found in survey papers [54].
Before presenting our algorithm, we need to define the neighborhood structures used in the search process. Let p = (p_1, . . . , p_m) ∈ Π_m represent a solution of the DSRFLP. For z ∈ {1, . . . , z_max}, we denote by Ñ_z(p) the set of all solutions that can be obtained from p by performing z pairwise interchanges of facilities subject to the condition that each facility changes its position in each permutation p_t, t ∈ {1, . . . , m}, at most once. The neighborhood structures {Ñ_z(p) | p ∈ Π_m}, z = 1, . . . , z_max, are appropriate in cases where a permutation-based combinatorial optimization problem is solved [55-57]. We use these structures in the shaking (solution perturbation) step of the algorithm.
The steps of our implementation of the VNS method are given in Algorithm 1. The algorithm starts by generating an initial solution, either randomly or by a heuristic procedure. This step is discussed later in this section. The initial solution is improved by applying a local search procedure LS (Line 2). We defer the description of LS to the next section. The main part of VNS is the "while" loop (Lines 4-14), which executes until the time condition is satisfied. The search is terminated when the maximum time limit, T_lim, is reached. The algorithm has three parameters that control the search process. The parameter z_min determines the size of the neighborhood from which the search begins. The largest possible size of the neighborhood is defined by z_max = ρn, where ρ is another parameter of the algorithm. The variable z_step is used to move from the current neighborhood to the next one. We set z_step = max(⌊z_max/θ⌋, 1), where the scaling factor θ > 0 is the third parameter of VNS. The best solution found by the algorithm is denoted by p* and its value by f*. The inner "while" loop of the algorithm iterates over the following three steps: the perturbation of the best solution p* (procedure shake in Line 7), local search (Line 8), and the selection of the size of the next neighborhood (procedure neighborhood_change in Line 9). These steps are executed until z becomes greater than z_max. At the end of each iteration, the termination condition of VNS is checked (Line 10).
Algorithm 1 Variable neighborhood search
VNS(z_min, z_max, z_step, T_lim)
Input: Instance of the DSRFLP, parameters z_min, z_max, z_step, and T_lim.
Output: Best found solution p* and its value f*.
// z_min, z_max, and z_step control the size of the neighborhood
// T_lim is the time limit for VNS
1: Generate an initial solution p ∈ Π_m
2: f := LS(p)
3: Assign p to p* and f to f*
4: while time limit T_lim not reached do
5:   z := z_min
6:   while z ≤ z_max do
7:     p := shake(p*, z)
8:     f := LS(p)
9:     z := neighborhood_change(p, f, z)
10:    if elapsed time is more than T_lim then
11:      Break from the loop
12:    end if
13:  end while
14: end while
15: Stop with the solution p* of value f*

The pseudocode of the shaking procedure and the neighborhood change function is given in Algorithms 2 and 3, respectively. The aim of shake is to perturb the best solution p* (or, more precisely, its copy p). The parameter z is the total number of pairwise interchange moves that are executed by the procedure. The number of moves performed on the permutation p_t, t ∈ {1, . . . , m}, is denoted by q_t. The procedure uniformly and randomly chooses an integer in the interval [1, m] z times. The value of q_t is equal to the number of times that the integer t is selected. Further steps of the procedure (Lines 3-10) are executed for each planning period t with q_t > 0. The inner loop (Lines 5-9) starts by randomly selecting two facilities from the set {p_t(i) | i ∈ I}. They are denoted as p_t(k) and p_t(l). Then, the permutation p_t is updated by swapping the positions of the selected facilities. The role of the set I in the algorithm is to guarantee that each facility is selected at most once. The solution returned by shake belongs to the neighborhood Ñ_z(p*). The procedure neighborhood_change either increases the shaking parameter z by z_step or sets it to the minimum value z_min if an improved solution has been found. The procedure is responsible for memorizing this solution (Line 2).
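To make the control flow of Algorithm 1 concrete, the sketch below runs the shake / LS / neighborhood_change cycle on a tiny DSRFLP instance. It is a deliberately naive prototype under assumptions of our own: the local search re-evaluates the full objective for every candidate swap rather than using the O(1) gain formulas of Section 3, an iteration budget stands in for the time limit T_lim, and the shake simply stops early in the rare case that a period receives more swaps than untouched positions allow. All function names are ours.

```python
import random

def centers(perm, lengths):
    """Center coordinate of each facility for a left-to-right layout."""
    x, off = {}, 0.0
    for s in perm:
        x[s] = off + lengths[s] / 2.0
        off += lengths[s]
    return x

def cost(perms, lengths, w, r):
    """DSRFLP objective: material handling plus rearrangement costs."""
    xs = [centers(p, lengths) for p in perms]
    n, total = len(lengths), 0.0
    for t, x in enumerate(xs):
        for s in range(n):
            for u in range(s + 1, n):
                total += w[t][s][u] * abs(x[s] - x[u])
        if t > 0:
            total += sum(r[t][s] for s in range(n) if x[s] != xs[t - 1][s])
    return total

def shake(p_star, z, rng):
    """z pairwise interchanges split at random over the m periods;
    within a period, each position takes part in at most one swap."""
    p = [list(pt) for pt in p_star]
    q = [0] * len(p)
    for _ in range(z):
        q[rng.randrange(len(p))] += 1
    for t, qt in enumerate(q):
        I = set(range(len(p[t])))   # positions not yet involved in a swap
        for _ in range(qt):
            if len(I) < 2:
                break               # our safeguard, see lead-in
            k, l = rng.sample(sorted(I), 2)
            p[t][k], p[t][l] = p[t][l], p[t][k]
            I -= {k, l}
    return p

def local_search(p, lengths, w, r):
    """Naive swap descent with full objective re-evaluation per move."""
    f = cost(p, lengths, w, r)
    improved = True
    while improved:
        improved = False
        for t in range(len(p)):
            for k in range(len(p[t])):
                for l in range(k + 1, len(p[t])):
                    p[t][k], p[t][l] = p[t][l], p[t][k]
                    fn = cost(p, lengths, w, r)
                    if fn < f:
                        f, improved = fn, True
                    else:           # undo non-improving swap
                        p[t][k], p[t][l] = p[t][l], p[t][k]
    return f

def vns(lengths, w, r, z_min=1, rho=0.5, theta=4, iters=20, seed=0):
    """VNS outer loop; an iteration budget stands in for T_lim."""
    rng = random.Random(seed)
    n, m = len(lengths), len(w)
    z_max = max(int(rho * n), z_min)
    z_step = max(z_max // theta, 1)
    start = list(range(n))
    rng.shuffle(start)
    best = [list(start) for _ in range(m)]
    f_best = local_search(best, lengths, w, r)
    for _ in range(iters):
        z = z_min
        while z <= z_max:
            cand = shake(best, z, rng)
            f = local_search(cand, lengths, w, r)
            if f < f_best:          # neighborhood_change: improvement found
                best, f_best, z = cand, f, z_min
            else:
                z += z_step
    return best, f_best
```

The returned solution is always at least as good as the locally optimized starting layout, since improvements are accepted only when f < f*.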

Algorithm 2 Shake function
shake(p*, z)
Input: Best-so-far solution p*, parameter z.
Output: Solution p in the neighborhood of p*.
1: Assign p* to p
2: Randomly split z into m nonnegative numbers q_t, t = 1, . . . , m
3: for t = 1, . . . , m such that q_t > 0 do
4:   I := {1, . . . , n}
5:   for q_t times do
6:     Randomly select k and l ≠ k from I
7:     Swap positions of facilities p_t(k) and p_t(l) in p_t
8:     I := I \ {k, l}
9:   end for
10: end for
11: return p = (p_1, . . . , p_m)

Algorithm 3 Neighborhood change function
neighborhood_change(p, f, z)
Input: Solution p of value f, parameters z_min and z_step.
Output: Possibly updated p* and f*, parameter z.
1: if f < f* then
2:   Assign p to p* and f to f*
3:   z := z_min
4: else
5:   z := z + z_step
6: end if
7: return z

Now, we return to the question of generating an initial solution to the DSRFLP. A simple way is to randomly generate a permutation of n facilities and assign this permutation to p_t for each t ∈ {1, . . . , m}. We call this configuration of our algorithm VNS1. An alternative strategy is to apply a heuristic technique. Our choice in this work is to use a VNS algorithm proposed in [16] for solving the SRFLP. We obtain an instance of the SRFLP with the flow matrix W = (w_su) from an instance of the DSRFLP with a set of flow matrices W_t = (w_tsu), t = 1, . . . , m, straightforwardly by summing the matrices W_t. Formally, w_su = Σ_{t=1}^{m} w_tsu, s, u = 1, . . . , n. Like the VNS presented in this paper, the algorithm in [16] is equipped with a CPU time-based stopping criterion. Therefore, we have to share the time resources between the generation of an initial solution (Line 1 of Algorithm 1) and the remaining steps of the VNS. We set the cutoff time for the first step (Line 1) to βT_lim. A suitable value of the parameter β should be chosen experimentally. Intuitively, β is expected to be less than 0.1. An experiment for selecting β is described in Section 4. Assume that p̃ is the solution produced by the algorithm in [16] for the SRFLP instance with the flow matrix W. We construct an initial solution for the DSRFLP by setting p_t = p̃ for each t = 1, . . . , m. We refer to this configuration of VNS as VNS2. It can be noticed that VNS1 is obtained from VNS2 by taking β = 0.
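The flow-matrix summation used to build the auxiliary SRFLP instance for VNS2 is a simple entrywise sum, w_su = Σ_t w_tsu. A minimal sketch (the function name `aggregate_flows` is ours):

```python
def aggregate_flows(w_periods):
    """Build the SRFLP flow matrix W from the period matrices W_t by
    summing entrywise: w_su = sum over t of w_tsu."""
    m, n = len(w_periods), len(w_periods[0])
    return [[sum(w_periods[t][s][u] for t in range(m)) for u in range(n)]
            for s in range(n)]
```

The resulting matrix is handed to the SRFLP heuristic of [16], and the permutation it returns is replicated over all m periods to form the initial DSRFLP solution.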

Local Search
An important issue in the design of local search algorithms is the choice of a neighborhood structure or structures. Our first priority is to use the insertion neighborhood. Let us denote this neighborhood for p = (p_1, . . . , p_m) ∈ Π_m by N(p). It consists of all solutions that can be obtained from p by removing a facility from its current position in a permutation p_t, t ∈ {1, . . . , m}, and inserting it at a different position in the same permutation. Given p ∈ Π_m, the construction of a solution in the neighborhood N(p) is called an insertion move. As noted by Schiavinotto and Stützle [55], the insertion move is one of the basic operators in permutation-based optimization problems. Assume that p′ = (p′_1, . . . , p′_m) ∈ N(p) is obtained from p by removing facility s = p_t(k) from position k and inserting it at position l. The change in cost between p and p′ is called the move gain. We denote it by δ(p, k, l) = F(p′) − F(p). The insertion operation is illustrated in Figure 2 for n = 6 and m = 3. Facility 1 is moved at the beginning of planning period 2 from its current position 3 to position 5. As a result, facilities 2 and 6 are also relocated. To describe methods for computing δ(p, k, l), we need some notation. We denote by X = (x_tu) an m × n matrix whose entry x_tu is the center coordinate of facility u during planning period t. We also define the following two functions:

g_1(u, t, x) = r_tu([x ≠ x_{t−1,u}] − [x_tu ≠ x_{t−1,u}]),   (4)

g_2(u, t, x) = r_{t+1,u}([x_{t+1,u} ≠ x] − [x_{t+1,u} ≠ x_tu]),   (5)

where the bracket [·] equals 1 if the enclosed condition holds and 0 otherwise. We note that x_tu, x_{t−1,u}, and x_{t+1,u} are original positions of facility u in periods t, t − 1, and t + 1, respectively, and x is the position of facility u after the movement during period t. Clearly, (4) is defined for t ∈ {2, . . . , m} and (5) is defined for t ∈ {1, . . . , m − 1}. Let us consider the general case of t ∈ {2, . . . , m − 1}. Assume that the center coordinate of facility u changes from x_tu to x when u is moved to a new location at the beginning of period t.
Then, the sum G(u, t, x) = g_1(u, t, x) + g_2(u, t, x) expresses the change in rearrangement cost of p incurred by this move operation. In Figure 2, this sum is r_{2,1} − r_{3,1} for facility 1, −r_{3,2} for facility 2, and −r_{3,6} for facility 6. The above-defined sum reduces to a single term, G(u, t, x) = g_2(u, t, x) for t = 1 and G(u, t, x) = g_1(u, t, x) for t = m. Now we return to the computation of δ(p, k, l). Let δ′(p, k, l) denote the change in material flow cost incurred by the insertion move producing solution p′ from the solution p. For our purposes, it is convenient to write the move gain as

δ(p, k, l) = δ′(p, k, l) + G(s, t, x_ts(l)),   (6)

where s = p_t(k) as before and x_ts(l) stands for the center coordinate of facility s in the layout during period t, which is obtained by inserting s at position l in the permutation p_t. The reason for using (6) is that both terms on its right-hand side (more precisely, δ′(p, k, l) and x_ts(l)) can be computed recursively.
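The quantity G(u, t, x), the change in rearrangement cost incurred by placing facility u at location x during period t, can be evaluated in O(1) time by comparing coordinates with the adjacent periods. The sketch below is our reading of g_1 and g_2, written to match the Figure 2 example (r_{2,1} − r_{3,1} for facility 1); the function name, 0-based indexing, and array layout are ours.

```python
def G(u, t, x_new, X, r):
    """Change in total rearrangement cost if facility u moves to center
    coordinate x_new in period t (0-indexed). X[t][u] holds the current
    centers; r[t][u] is the cost of relocating u at the start of period t."""
    m = len(X)
    g = 0.0
    if t > 0:       # g1: cost charged at the beginning of period t
        g += r[t][u] * ((x_new != X[t - 1][u]) - (X[t][u] != X[t - 1][u]))
    if t < m - 1:   # g2: cost charged at the beginning of period t + 1
        g += r[t + 1][u] * ((X[t + 1][u] != x_new) - (X[t + 1][u] != X[t][u]))
    return g
```

For a facility that previously matched its period t − 1 location but now differs, the first term charges r_tu; if the move makes the facility coincide with its period t + 1 location where it previously differed, the second term refunds r_{t+1,u}.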
To proceed, suppose that l < k. We note that the insertion move can be reduced to a sequence of elementary moves in which facility s is interchanged with its left neighbor. The insertion process starts by interchanging s with facility p_t(k − 1). Next, s is interchanged with p_t(k − 2), then with p_t(k − 3), etc. The process ends when facility s reaches position l in the permutation p_t. Let us focus on the last step in the described process: facility s is interchanged with facility v = p_t(l). At this point, δ′(p, k, l + 1) and x_ts(l + 1) are assumed to be known. The last step of the insertion process is illustrated in Figure 3. We see that after swapping the positions of facilities s and v, the distance between facility p_t(i) = u, i < l, and s decreases by L_v and that between u and v increases by L_s.
Similarly, the distance between facility p_t(i), i > l, i ≠ k, and s increases by L_v and that between p_t(i) and v decreases by L_s. Based on these observations, we can write a recurrence relation (7) for δ′(p, k, l), whose last term represents the change in the cost resulting from the rearrangement of facility v. The new center coordinate of facility s is computed as follows:

x_ts(l) = x_ts(l + 1) − L_v.   (8)

The initial conditions of the recurrence relations (7) and (8) are δ′(p, k, k) = 0 and x_ts(k) = x_ts, respectively. If k < l, then (7) and (8) are replaced by recurrence (9) and the equation

x_ts(l) = x_ts(l − 1) + L_v,   (10)

where v = p_t(l). The initial conditions of (9) and (10) are the same as in the case of l < k. The approach we have just described, however, is not very efficient: it takes O(n) time to compute δ′(p, k, l) using (7) or (9). We therefore adopt a method that was proposed in [16] for solving the SRFLP. We use an m × (n − 1) matrix C = (c_tq) with entries c_tq = Σ_{i=1}^{q} Σ_{j=q+1}^{n} w_{t p_t(i) p_t(j)}, t = 1, . . . , m, q = 1, . . . , n − 1. The entry c_tq represents the sum of flow costs between the first q facilities and the remaining n − q facilities during period t. To compute C, we use two other matrices, E¹ = (e¹_tu) and E² = (e²_tu), where it is assumed that u = p_t(q). The rows of the matrices E¹ and E² are indexed by periods and the columns by facilities. It is not difficult to see that C can be computed row by row via Equation (11), where by convention c_t0 = 0, t = 1, . . . , m. From C, we can build the matrix C− = (c−_tq) with entries c−_tq = c_tq − c_{t,q−1} and the matrix C+ = (c+_tq) with entries c+_tq = c_tq + c_{t,q−1}, where t = 1, . . . , m and q = 1, . . . , n. In these definitions, it is assumed that c_tn = 0, t = 1, . . . , m. An efficient way to compute δ′(p, k, l) is provided by Formulas (12) and (13).
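For a single period, the entries c_tq and the derived rows of C− and C+ can be computed straight from their definitions, as in the sketch below (names ours; flows are assumed to be stored upper-triangularly as w[s][u] with s < u). The direct recomputation shown here is for illustration only; the paper maintains these rows incrementally via E¹, E², and Equation (11).

```python
def build_C(perm, w):
    """For one period: c[q] = sum of flow costs between the first q and the
    last n - q facilities (c[0] = c[n] = 0 by convention). Also returns the
    rows of C- (c[q] - c[q-1]) and C+ (c[q] + c[q-1]) for q = 1, ..., n."""
    n = len(perm)
    c = [0.0] * (n + 1)
    for q in range(1, n):
        # flow across the cut between positions 1..q and q+1..n
        c[q] = sum(w[min(perm[i], perm[j])][max(perm[i], perm[j])]
                   for i in range(q) for j in range(q, n))
    c_minus = [0.0] + [c[q] - c[q - 1] for q in range(1, n + 1)]
    c_plus = [0.0] + [c[q] + c[q - 1] for q in range(1, n + 1)]
    return c, c_minus, c_plus
```

With three facilities in order (1, 2, 3), c_1 is the flow from the first facility to the other two and c_2 the flow from the first two to the last one.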
Proof. Consider the case of l < k. Suppose that the facility s is interchanged with its current left neighbor v = p_t(l). Let the resulting change in material handling cost be denoted by δ_1(p, k, l) and the cost of rearranging facility v by δ_2(p, k, l). Thus, δ′(p, k, l) = δ′(p, k, l + 1) + δ_1(p, k, l) + δ_2(p, k, l). Based on Proposition 6 in [16], δ_1(p, k, l) can be expressed in closed form in terms of the matrices defined above. Moreover, from the definition of the functions g_1, g_2, and G, it is obvious that δ_2(p, k, l) = G(v, t, x_tv + L_s). Taken together, these two observations prove (12). The same line of reasoning applies to Equation (13).
When facility s is relocated from position k to position l in the permutation p_t, the row of the matrix C− corresponding to period t needs to be updated. Suppose first that l < k. Then, for each facility v = p_t(q), q ∈ {l, . . . , k − 1}, the material flow cost w_tsv is added to e¹_tv and e²_ts and subtracted from e²_tv and e¹_ts. If l > k, then, for each facility v = p_t(q), q ∈ {k + 1, . . . , l}, w_tsv is added to e²_tv and e¹_ts and subtracted from e¹_tv and e²_ts. This step of the algorithm also includes updating the tth row of the matrix X and the permutation p_t.
Having updated E¹ and E², the new entries in the tth row of C are computed using (11). This is performed only for q = min(k, l), . . . , max(k, l) − 1. Finally, the tth row of the matrix C− is updated according to the definition of C−.
To make the search more productive, our LS algorithm also applies the swap operator, which consists of swapping the positions of two facilities. The obtained solution belongs to the neighborhood Ñ_1 defined in Section 2. To lessen the computational burden, the algorithm employs a reduced swap neighborhood N̄_1. For p ∈ Π_m, a solution p′ ∈ Ñ_1(p) belongs to N̄_1(p) if and only if it is obtained by interchanging in p_t, t ∈ {1, . . . , m}, two facilities of equal length. Let these facilities be denoted by s = p_t(k) and u = p_t(l). Assume w.l.o.g. that k < l. Since L_s = L_u, the center coordinates of facilities p_t(i), i = k + 1, . . . , l − 1, in p′ are the same as in p. This fact significantly reduces the complexity of computing the difference between F(p′) and F(p). We call this difference the gain of the swap move and denote it by ∆(p, k, l) = F(p′) − F(p). One possible method to compute ∆(p, k, l) uses a distance matrix. Let us denote this n × n matrix by D_t = (d_t(p_t(i), p_t(j))), where d_t(p_t(i), p_t(j)) is the distance between the centers of facilities p_t(i) and p_t(j) during period t as defined by Equation (2). Assume that the positions of facilities s and u with L_s = L_u are swapped in the permutation p_t. This operation changes the material flow cost between each facility v ∈ H \ {s, u} and facilities s and u. This change is equal to

Σ_{v ∈ H\{s,u}} (w_tsv − w_tuv)(d_t(u, v) − d_t(s, v)).   (14)

The described method is simple but not very efficient. A good alternative is to adopt an approach proposed in [16]. To present an expression for ∆(p, k, l), we use the following matrices: A_t = (a_tvj), v ∈ H, j = 1, . . . , n, t ∈ {1, . . . , m}, with entries defined for v = p_t(q), and B_t = (b_tvj), v ∈ H, j = 1, . . . , n, t ∈ {1, . . . , m}, with entries defined for v = p_t(q). Matrices C− and C+ were already defined earlier in this section. With these matrices, we can provide a formula to calculate the gain ∆(p, k, l).
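The distance-matrix method just described can be sketched directly for a pair of equal-length facilities: since only s and u exchange center coordinates, the flow-cost change is Σ_v (w_tsv − w_tuv)(d_t(u, v) − d_t(s, v)), and the rearrangement part involves only s and u. This is our naive O(n)-per-pair illustration of the simple method, not the O(1) formulas (15) and (16); all names are ours, and the rearrangement-change helper follows our reading of G(u, t, x).

```python
def rearr_change(u, t, x_new, X, r):
    """Change in rearrangement cost if facility u moves to x_new in period t
    (0-indexed); X[t][u]: current centers, r[t][u]: relocation costs."""
    m, g = len(X), 0.0
    if t > 0:
        g += r[t][u] * ((x_new != X[t - 1][u]) - (X[t][u] != X[t - 1][u]))
    if t < m - 1:
        g += r[t + 1][u] * ((X[t + 1][u] != x_new) - (X[t + 1][u] != X[t][u]))
    return g

def swap_gain(p, t, k, l, lengths, w, r, X):
    """Naive gain of swapping equal-length facilities p[t][k] and p[t][l];
    O(n) per pair, using current center coordinates as the distance data."""
    s, u = p[t][k], p[t][l]
    assert lengths[s] == lengths[u], "reduced neighborhood: equal lengths only"
    xs, xu = X[t][s], X[t][u]
    delta = 0.0
    for v in range(len(lengths)):
        if v in (s, u):
            continue
        wsv = w[t][min(s, v)][max(s, v)]   # flows stored upper-triangularly
        wuv = w[t][min(u, v)][max(u, v)]
        delta += (wsv - wuv) * (abs(X[t][v] - xu) - abs(X[t][v] - xs))
    # s takes u's center coordinate and vice versa
    return delta + rearr_change(s, t, xu, X, r) + rearr_change(u, t, xs, X, r)
```

Evaluating this for every eligible pair reproduces the "simple but not very efficient" exploration that the matrices A_t, B_t, C−, and C+ are designed to avoid.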
The corresponding row of the matrix B_t can be constructed by starting with b_tvq = 0 and applying Equations (17)-(20), whose proof can be found in [16]. Suppose that facilities s = p_t(k) and u = p_t(l), l > k, are interchanged in the permutation p_t. Then, the matrices A_t and B_t need to be updated. This can easily be done using Equations (17)-(20). For simplicity, we assume that p in these equations refers to the solution obtained after performing the swap move. Let v = p_t(q). If q < k, then (17) is used for j = k, . . . , l − 1 and (19) for j = k, . . . , n. If q > l, then (18) is used for j = l, l − 1, . . . , k + 1 and (20) for j = l, l − 1, . . . , 1. If k < q < l, then A_t is updated by Equation (17) for j = l, . . . , n and by (18) for j = k, k − 1, . . . , 1. Similarly, B_t is updated by Equation (19) for j = l, . . . , n and by (20) for j = k, k − 1, . . . , 1. Finally, if q = k (in this case, v = u) or q = l (in this case, v = s), the vth row of A_t and B_t is created anew by first setting a_tvq = b_tvq = 0 and then applying Equations (17)-(20).
Our local search algorithm for the DSRFLP explores the reduced swap neighborhood N̄_1 and the insertion neighborhood N repeatedly. This fact also implies the need to update the matrices A_t and B_t after performing an insertion move. As in the case of pairwise interchange of facilities, the matrices are updated using Equations (17)-(20). Assume that facility s is moved from position k to position l in permutation p_t. Let i′ = min(k, l), i″ = max(k, l), and consider facility v = p_t(q) (here, p_t stands for the updated permutation). If q < i′, then a_tvj, j = i′, . . . , i″ − 1, are calculated by Equation (17) and b_tvj, j = i′, . . . , n, are calculated by Equation (19). Other entries in the vth row of A_t and B_t remain unchanged. If q > i″, then (18) is used for j = i″, i″ − 1, . . . , i′ + 1 and (20) for j = i″, i″ − 1, . . . , 1. If q = l (in this case, v = s = p_t(l)), then first a_tvl and b_tvl are set to 0 and then Equations (17)-(20) are applied. Consider now the remaining case where i′ ≤ q ≤ i″ and q ≠ l. If k < l, then a_tvj is set to a_tv,j+1 and b_tvj is set to b_tv,j+1 for j = k, . . . , l − 1. If k > l, then a_tvj is set to a_tv,j−1 and b_tvj is set to b_tv,j−1 for j = k, k − 1, . . . , l + 1. Moreover, a_tvj is calculated by (17) and b_tvj by (19) for j = j′, . . . , n, where j′ = l if k < l and j′ = k + 1 if k > l. Furthermore, a_tvj is calculated by (18) and b_tvj by (20) for j = j″, j″ − 1, . . . , 1, where j″ = k − 1 if k < l and j″ = l if k > l. The step-by-step routines to update the matrices A_t and B_t can be found in [16].
Algorithm 4 gives the pseudocode of the LS component of the approach. The input to LS includes the solution p from which the search is started. This solution is possibly replaced by a better one during the search process. The returned solution p is locally optimal with respect to two neighborhoods, the reduced swap neighborhood N̄_1(p) and the insertion neighborhood N(p). The variable f stores the objective function value of the solution p. The algorithm starts with the initialization of the matrices C, C−, C+, X, D_t, and B_t, t = 1, . . . , m. Since B_t depends on the matrix A_t, the latter, for each t ∈ {1, . . . , m}, is initialized as well. Then, the algorithm proceeds iteratively. At each iteration, it first repeatedly explores the reduced swap neighborhood N̄_1(p) of the current solution p until a locally optimal solution is reached (Lines 4-8). Then, the procedure explore_insertion_neighborhood is triggered. It either returns an improved solution (if ∆* < 0) or reports that the solution p is locally optimal with respect to the neighborhood N(p) (if ∆* = 0). In the latter case, the algorithm terminates. We apply the explore_swap_neighborhood procedure first and explore_insertion_neighborhood second because the number of possible swap moves is expected to be much smaller than the number of possible insertion moves. This is because the swap operation is restricted to pairs of facilities with equal lengths.

Algorithm 4 Local search procedure
LS(p)
Input: Solution p ∈ Π_m.
Output: Objective function value f of the (possibly improved) solution p.
1: Initialize the matrices C, C−, C+, X, and A_t, B_t, D_t, t = 1, . . . , m, and compute the value f of p
2: µ := true
3: while µ do
4:   ∆* := explore_swap_neighborhood(p)
5:   while ∆* < 0 do
6:     f := f + ∆*
7:     ∆* := explore_swap_neighborhood(p)
8:   end while
9:   µ := false
10:  ∆* := explore_insertion_neighborhood(p)
11:  if ∆* < 0 then
12:    f := f + ∆*
13:    µ := true
14:  end if
15: end while
16: return f // possibly improved solution p is also returned

The pseudocode of the procedure explore_swap_neighborhood is given in Algorithm 5. The nested loops in Lines 2-10 implement Formulas (15) and (16). An improving move is represented by the triplet (t*, k*, l*). If such a move has been detected, then the positions of the selected facilities are swapped in p_{t*} (Line 12). This step is followed by updating the matrices used in the calculation of the move gain (Line 13).
Algorithm 5 (fragment):
1: ∆* := 0
2: for t := 1 to m do
3:     for each pair s = p t (k) ∈ H and u = p t (l) ∈ H such that l > k and L s = L u do
4:         Compute ∆ := ∆(p, k, l) by (15) if l > k + 1 or by (16) if l = k + 1
5:         if ∆ < ∆* then
6:             ∆* := ∆
7:             Set t* := t, k* := k, and l* := l
8:         end if
9:     end for
10: end for
11: if ∆* < 0 then
12:     Swap positions of facilities p t* (k*) and p t* (l*) in p t*
13:     Update matrices C, C−, C+, X, D t*, and B t*
14: end if
15: return ∆*

The following statement shows the efficiency of the procedure.

Proposition 3. The computational complexity of the procedure explore_swap_neighborhood is O(mn²).
Proof. Observe that loops 2-10 run in O(mn²) time. This follows from the fact that the time complexity of the ∆ calculation (Line 4) is O(1). The other parts of the procedure have lower complexity: the procedure performs O(n²) operations to update the matrices D t* and B t*, O(n) operations to update the matrices C, C−, and C+, and O(1) operations to update the matrix X.
We remark that the computational complexity of Algorithm 5 asymptotically matches the size of the neighborhood Ñ1, which is O(mn²). Thus, the neighborhood exploration is performed in an efficient way. When the number of planning periods m is a constant, the running time of Algorithm 5 reduces to O(n²). As an alternative to the described procedure, one might use Equation (14) instead of (15) and (16) in Line 4 of Algorithm 5. However, the worst-case complexity of such an implementation would be O(mn³).
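The structure of Algorithm 5 can be sketched in Python as follows. This is an illustrative sketch, not the authors' code: the gain function is passed in as a callable and is assumed to run in O(1) time, as Propositions 1 and 2 provide; the swap is restricted to pairs of facilities of equal length, so the scan over all periods takes O(mn²) time.

```python
def explore_swap_neighborhood(perms, lengths, gain):
    """Scan all periods for the best swap of two equal-length facilities.

    perms   -- list of m permutations (one per period)
    lengths -- dict mapping facility -> length
    gain    -- callable gain(t, k, l): cost change of swapping positions
               k and l in period t (assumed O(1), as in the paper)
    Returns (best_gain, move), where move = (t, k, l) or None.
    """
    best, move = 0, None
    for t, p in enumerate(perms):
        n = len(p)
        for k in range(n - 1):
            for l in range(k + 1, n):
                # restrict swaps to pairs of facilities of equal length
                if lengths[p[k]] != lengths[p[l]]:
                    continue
                d = gain(t, k, l)
                if d < best:
                    best, move = d, (t, k, l)
    return best, move
```

If no improving move exists, the procedure returns a zero gain and no move, matching the ∆* = 0 termination test in Algorithm 4.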

Computational Results
In this section, we report the results of computational tests assessing the performance of the developed variable neighborhood search algorithm for solving the DSRFLP. The effectiveness of the approach is evaluated by comparing VNS against the simulated annealing algorithm proposed by Şahin et al. [2].

Experimental Setup
The VNS algorithm described in the previous sections was coded in the C++ programming language. For comparison purposes, we also coded the simulated annealing algorithm of Şahin et al. [2]. These authors used the name SA-R to refer to this algorithm, and we keep this name. We ran SA-R with the parameter settings used in [2]. However, to be able to apply a time-based stopping criterion, we implemented SA-R as a multistart algorithm: each restart of SA-R begins with a randomly generated solution. The experiments with VNS and SA-R were carried out on a PC with an Intel Core i5-9400F CPU running at 2.90 GHz.
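A time-based multistart scheme of the kind described can be sketched as follows. Here run_once is a hypothetical stand-in for a single SA-R pass (it is not the actual SA-R code): it takes a random starting permutation and returns a solution with its cost.

```python
import random
import time

def multistart(run_once, time_limit, n):
    """Restart a local optimizer from random permutations until the
    time budget is spent; keep the best solution found.

    run_once   -- callable: starting permutation -> (solution, cost)
    time_limit -- budget in seconds
    n          -- number of facilities
    """
    best_sol, best_cost = None, float("inf")
    deadline = time.monotonic() + time_limit
    while time.monotonic() < deadline:
        start = random.sample(range(n), n)  # random starting permutation
        sol, cost = run_once(start)
        if cost < best_cost:
            best_sol, best_cost = sol, cost
    return best_sol, best_cost
```

The same wrapper makes any restart-based heuristic comparable under a common CPU time limit, which is how SA-R and VNS were matched in our experiments.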
Since research on the DSRFLP is in its infancy, there are no benchmarks available in the literature. Therefore, we performed our experiments on a set of randomly generated problem instances. The entries of the cost matrix (ψ su) and the material flow matrices (φ tsu), t ∈ {1, . . . , m}, in these instances are random numbers drawn uniformly between 1 and 5 and between 1 and 10, respectively. The costs associated with rearranging facilities are randomly and uniformly sampled from the interval [250, 500] if n ≤ 100 and from the interval [1000, 2000] if n > 100. The length of each facility is a random integer between 1 and 5. The generated instances have from 10 to 200 facilities. The number of planning periods is either 3 or 5. The dataset is publicly available (https://drive.google.com/file/d/1P36xdv4QpZxFmROhAa8Z2dOXU9zJOJKl/view?usp=sharing, accessed on 1 May 2022). It is important to mention that, as the number of facilities increases, a more realistic layout scenario is to place facilities in a multi-row configuration. Our main reason for using single row layout problem instances with a large number of facilities (n > 100) was to evaluate the performance of the described algorithms more comprehensively.
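The instance generation scheme just described can be sketched in Python. This is a hedged illustration: the symmetric-matrix convention and the per-facility form of the rearrangement costs are assumptions on our part, not taken from the dataset's actual format.

```python
import random

def generate_instance(n, m, seed=None):
    """Generate a random DSRFLP instance with the distributions
    described above: psi in [1, 5], flows in [1, 10], rearrangement
    costs in [250, 500] (n <= 100) or [1000, 2000] (n > 100),
    facility lengths in [1, 5]."""
    rng = random.Random(seed)
    psi = [[0] * n for _ in range(n)]  # cost matrix (assumed symmetric)
    for s in range(n):
        for u in range(s + 1, n):
            psi[s][u] = psi[u][s] = rng.randint(1, 5)
    # one flow matrix per planning period (assumed symmetric)
    phi = [[[0] * n for _ in range(n)] for _ in range(m)]
    for t in range(m):
        for s in range(n):
            for u in range(s + 1, n):
                phi[t][s][u] = phi[t][u][s] = rng.randint(1, 10)
    lo, hi = (250, 500) if n <= 100 else (1000, 2000)
    rearr = [rng.uniform(lo, hi) for _ in range(n)]  # rearrangement costs
    lengths = [rng.randint(1, 5) for _ in range(n)]
    return psi, phi, rearr, lengths
```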
In our computational experiments, we ran both VNS and SA-R 10 times per instance. The maximum CPU time limits for a run of an algorithm were as follows: 180 s for n ≤ 20, 600 s for n = 30, 1800 s for 40 ≤ n ≤ 60, and 3600 s for n ≥ 70. To measure the performance of the algorithms, we use the following statistics: the best objective function value from the 10 independent runs; the average objective function value over the 10 runs; and the average time for reaching the best result.

Parameter Settings
In Section 2, we described two versions of our VNS implementation, VNS1 and VNS2. The first of them starts with a randomly generated permutation of facilities. Its parameters are ρ, z min , and θ (see Section 2). The second version (VNS2) starts with an initial solution produced by a VNS heuristic applied on an SRFLP instance. The run time of this heuristic is controlled through parameter β. Certainly, the parameters ρ, z min , and θ apply to VNS2, too.
To find good parameter settings for ρ, z min, and θ, we examined the performance of VNS on a training sample consisting of 10 DSRFLP instances with n = 60, 70, 80, 90, and 100 and m = 3 and 5. These instances were randomly generated as described in Section 4.1. Of course, the training sample is disjoint from the main dataset, which is reserved for the final testing stage. In each experiment, we ran several configurations of VNS. To evaluate them, we used the relative deviation Σ (F − F min)/F min (23), where F denotes the objective function value achieved by the tested configuration and F min represents the minimum objective function value obtained over all configurations. The sum in (23) is taken over all instances in the sample. During the parameter tuning phase, we set the cutoff time of VNS to 300 s per instance.
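Assuming (23) aggregates relative deviations from the best known value over the training sample, the configuration score can be computed as in the following minimal sketch (the function name is ours, purely illustrative):

```python
def config_score(F_values, F_min_values):
    """Aggregate relative deviation of one VNS configuration from the
    best result over all training instances (our reading of (23));
    lower is better, and the best configuration on every instance
    scores 0."""
    return sum((F - Fmin) / Fmin
               for F, Fmin in zip(F_values, F_min_values))
```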
In the stage of preliminary experimentation, we ran VNS1 and VNS2 using various combinations of the values of the parameters ρ, z min , and θ. We observed that the performance of VNS1 and VNS2 was not very sensitive to the choice of these parameters. Therefore, we relied on a simple parameter setting procedure for our algorithm. Based on early tests, we first identified a range of potential values for each parameter. Then, we allowed one parameter to take on different values from its range while keeping the other parameters fixed at reasonable values chosen on the basis of preliminary numerical experiments.
We started our numerical analysis by investigating the influence of the parameter ρ on the performance of the VNS algorithm. This parameter defines the maximum size of the neighborhood in the search process. We ran VNS with ρ ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 1, 1.5, 2}. The results are plotted in Figure 4. We see that the best performance of VNS was achieved for ρ ≤ 0.4, with a slight edge to ρ = 0.3. Therefore, we fixed ρ at 0.3 for all further experiments with VNS. We then examined the following five values of the parameter z min: 1, 3, 5, 10, and 20. Figure 5 shows that the algorithm was fairly robust to the choice of z min. We decided to fix z min at 3. Furthermore, we investigated the effect of the parameter θ on the performance of VNS. We ran the algorithm with θ = 1, 2, 3, 4, 5, 7, 10, and 20. From Figure 6, we see that setting θ = 1 is a bad choice. The best values of θ appeared to be 2 and 5, and the results for the other tested values of θ were very similar to one another. Based on the obtained results, we fixed θ at 5. As we alluded to at the beginning of this section, the VNS2 version of the algorithm makes use of the time share parameter β. This parameter represents the proportion of time allotted to a heuristic for providing an initial solution. We tested values of β from 0 to 0.06 in increments of 0.01. In addition, we ran VNS2 with β = 0.08, 0.1, and 0.2. The results are displayed in Figure 7. We observe that the best performance of VNS2 was achieved for β ∈ {0.04, 0.05, 0.06}. We elected to set β to 0.04 for all further experiments with VNS2 reported in this paper.

Computational Results for Smaller Sized Instances
We first report the computational experiments on a set of DSRFLP instances with up to 100 facilities. Table 2 compares the best results achieved by VNS1, VNS2, and SA-R. Its first column contains the instance names; the first integer in the name gives the number of facilities and the second integer gives the number of planning periods. In the next columns, F* is the objective function value of the best solution out of 10 runs. The average results of VNS1, VNS2, and SA-R are listed in Table 3. The two columns for each algorithm contain the average objective function value of the 10 solutions, denoted as F̄, and the average time (in seconds) taken to find the best solution in a run. The bottom row of Tables 2 and 3 shows the results averaged over all 20 problem instances. The best value of F* (in Table 2) and F̄ (in Table 3) for each instance is highlighted in boldface. Table 2. Best results of VNS1, VNS2, and SA-R for smaller sized instances.

The superiority of VNS over SA-R is also evidenced by the statistics presented in the last row of the tables, where we show the averaged results for each algorithm. Comparing VNS1 and VNS2, we find that VNS2 exhibits better performance than VNS1 in terms of F̄, but is slightly worse in terms of F*. To more rigorously assess the results, we apply the Wilcoxon signed-rank test. The comparison results are summarized in Table 4. The first column indicates which objective function values are compared: the best solution values (F* in Table 2) in the first row and the average solution values (F̄ in Table 3) in the second row. The next three columns show the comparison results: #wins, #ties, and #losses count the number of instances on which VNS2 found a better, an equally good, or an inferior solution compared to VNS1. The p-values from the Wilcoxon test are reported in the penultimate column. We use a standard significance level of 0.05 to judge whether a significant difference exists between the algorithms. The value "Yes" means that the average results of VNS2 are better than those of VNS1, while "No" means that there is no statistically significant difference between the two algorithms when the comparison is based on the F* values. The average running time taken by each algorithm to reach the last improvement in solution quality is shown in Table 3. We see that SA-R took less time to find the best solution in a run than VNS1 and VNS2; the running times of the latter two algorithms are quite comparable. Part (a) of Figure 9 shows the convergence speed of the tested algorithms for problem instance p-100-5. For each algorithm, a plot of the objective function value of the best solution versus computational time is provided. The first point plotted for VNS2 represents the solution generated by a VNS heuristic for the SRFLP and improved by running LS once. From the figure, we see that SA-R stops improving the best solution earlier than VNS1 and VNS2.
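The win/tie/loss counts of this kind of comparison are straightforward to compute; a minimal sketch with illustrative (fabricated) objective values follows. The p-values themselves would come from a standard Wilcoxon signed-rank implementation, such as scipy.stats.wilcoxon.

```python
def compare_runs(results_a, results_b):
    """Count per-instance wins/ties/losses of algorithm A versus B,
    where a lower objective value is better (as in the DSRFLP)."""
    pairs = list(zip(results_a, results_b))
    wins = sum(a < b for a, b in pairs)
    ties = sum(a == b for a, b in pairs)
    losses = sum(a > b for a, b in pairs)
    return wins, ties, losses

# Illustrative toy values for three instances (not experiment data)
print(compare_runs([100, 250, 300], [120, 250, 290]))  # prints (1, 1, 1)
```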

Computational Results for Larger Sized Instances
Our second experiment aimed to assess the performance of algorithms on a set of DSRFLP instances with the number of facilities ranging from 110 to 200. The results are reported in Tables 5 and 6. They have the same structure as Tables 2 and 3. The findings from the experiment slightly differ from those discussed in the previous section. The differences are not only due to an increase in the size of problem instances. We used slightly different parameters to generate instances of size n > 100. The rearrangement costs for these instances are four times higher than in the case of instances of a size up to 100 (see Section 4.1).
Perhaps the main observation from Tables 5 and 6 is the overwhelming superiority of VNS2 over the other two algorithms. Essentially, VNS2 dominated SA-R and VNS1 across all 20 problem instances. By contrasting the penultimate column of Table 6 with the second and third columns of Table 5, we can see that the average result of VNS2 is better than the best SA-R (or VNS1) result for all instances except p-130-3. Comparing SA-R with VNS1, it can be noticed from the bottom rows of the tables that VNS1 found better "best" solutions (Table 5), while SA-R produced better solutions on average (Table 6). The boxplots for the four DSRFLP instances with n > 100 are shown in Figure 10. They confirm the effectiveness of VNS2 in comparison with the other evaluated algorithms.

Table 7 summarizes the comparison results between SA-R and VNS1. Regarding the average quality of solutions, the Wilcoxon signed-rank test demonstrated a statistically significant difference in favor of SA-R. However, there was no significant difference between the results of SA-R and VNS1 in the case of the best solutions.
In Table 6, we also report the average running time of the tested algorithms. We see that SA-R found the best solution in a run earlier than the VNS configurations. We also notice that VNS2, and especially VNS1, often obtained an improved solution when the time limit of 1 h per run was close to expiring. In part (b) of Figure 9, we compare the convergence speed of the algorithms for the problem instance p-200-3. We increased the cutoff time for a run to 5 h. However, as we can see from the figure, after 3 h of execution, the improvement in solution quality was very marginal. We conclude this section with a couple of remarks concerning the performance of the tested algorithms. We experimentally compared our two algorithms (VNS1 and VNS2) for the DSRFLP. The results clearly indicate that VNS2 is the key algorithm in this study. We compared VNS2 with the simulated annealing algorithm (SA-R) from the literature. Experiments showed that the performance of VNS2 was superior to that of SA-R.

Analysis of Local Search Variants
To demonstrate the computational efficiency of the LS component of our algorithm, we experimentally compared it with alternative local search implementations. One idea was to abandon the use of the swap operator and base the search entirely on insertion moves. Other attempts were directed towards simplifying the gain calculation process by replacing the use of Propositions 1 and 2 with Equations (7), (9), and (14). None of the resulting procedures, however, should be considered an independent algorithm. Essentially, these procedures can be treated as modifications of VNS2, obtained by making small changes to the LS part of VNS2. We investigated alternative LS implementations in order to justify various design choices made in the construction of the VNS2 algorithm. To assess the performance of the variable neighborhood search algorithm with alternative LS approaches, we used the main VNS2 variant (described in Sections 2 and 3) as a reference method.

Usefulness of Swap Moves
We numerically analyzed the performance of a VNS2 version in which the LS procedure does not employ the swap neighborhood structure. We refer to this version as VNS2a. Basically, VNS2a is obtained from VNS2 by deleting Lines 4-8 in Algorithm 4.
To avoid unnecessarily long computations, we performed the comparison between VNS2 and VNS2a on the set of DSRFLP instances of Section 4.1 with n ≤ 100. The comparison results are shown in Figure 11. On the x axis, F VNS2a − F VNS2 represents the difference in the F* values between VNS2a and VNS2 in the bars labeled "Best", and the difference in the F̄ values between VNS2a and VNS2 in the bars labeled "Average". We provide results for problem instances of size at least 40. For smaller instances, the two versions of VNS2 either obtained the same solution or the difference between the objective function values was relatively small. We see in the figure that VNS2a found a better solution than VNS2 for p-40-5. The objective function value achieved by VNS2a for this instance is 1,982,841.80. This value, however, is larger than that reported by VNS1 (see Table 2). We also observed that, for all problem instances of size greater than 40, VNS2 produced better solutions than VNS2a. The difference F VNS2a − F VNS2 averaged over all 20 instances was 8486.30 in the case of the F* values and 11,322.90 in the case of the F̄ values. From the figure, we can conclude that the swap neighborhood exploration procedure (Algorithm 5) is an important component of the approach, which helps to significantly improve the quality of solutions produced by VNS. Figure 11. Difference in the solution values between VNS2a (that is, VNS2 with no swap operator) and the VNS2 algorithm.

Benefit of Fast Neighborhood Exploration
The purpose of this section is to show that our idea to calculate the move gain in constant time is fruitful. To this end, we consider the following variations of the VNS2 algorithm: • VNS2b obtained from VNS2 by replacing the equations of Proposition 1 with Equations (7) and (9); • VNS2c obtained from VNS2 by replacing Equation (15) with Equation (14); • VNS2d obtained from VNS2 by replacing the equations of Propositions 1 and 2 with Equations (7), (9), and (14).
Since the VNS2 algorithm employs fast neighborhood exploration procedures, it was expected that the above-listed alternative VNS2 versions could not improve on the results obtained by VNS2. Our experiment confirmed this expectation. As in Section 5.1, we ran the algorithms on problem instances of size n ≤ 100. The results of solving these instances are reported in Figures 12 and 13. The differences shown in these figures for each VNS2 version were obtained similarly to those in Figure 11. We note that all versions of VNS2 managed to find the best solution for the first five instances in the dataset, and therefore the results are only provided for p-30-5 and all instances with n ≥ 40. Figure 12 depicts the performance differences calculated for the F* values, and Figure 13 shows the performance differences obtained for the F̄ values. As is evident from the figures, VNS2 produced better solutions than VNS2b, VNS2c, and VNS2d for all problem instances. The average values of F VNS2b − F VNS2, F VNS2c − F VNS2, and F VNS2d − F VNS2 in Figure 12 are 1146.54, 1305.63, and 1580.42, respectively, and those in Figure 13 are 1045.29, 1322.69, and 1686.44, respectively. As expected, VNS2d obtained worse solutions than VNS2b and VNS2c. In general, it can be concluded that the use of the proposed neighborhood exploration procedures has a large impact on improving the performance of the variable neighborhood search method for solving the DSRFLP.

Concluding Remarks
In this paper, we developed a variable neighborhood search algorithm for solving the dynamic single row facility layout problem. The effectiveness of this algorithm strongly depends on the quality of the adopted LS strategy. The main contribution of this work is an LS algorithm based on the fast neighborhood exploration procedures developed for the swap and insertion neighborhood structures. Provided that the number of planning periods is a constant, these procedures have O(n²) time complexity. In one version of VNS, a starting solution is generated by performing a short run of a VNS procedure for solving the SRFLP; an instance of the latter is constructed from an instance of the DSRFLP. The other version of VNS starts with a randomly generated solution.
Both VNS versions were numerically compared against the SA approach of Şahin et al. [2], which is the state-of-the-art algorithm for the DSRFLP. Computational experiments were conducted on problem instances with up to 200 facilities and 3 or 5 planning periods. The results indicate that VNS with a heuristically constructed initial solution outperformed the SA algorithm. The superiority of VNS over SA is more pronounced for DSRFLP instances of size greater than 100.
Additional experiments were carried out to show the effectiveness of the proposed LS procedure. A conclusion was reached that it is advantageous to explore the swap neighborhood, and not to restrict LS to performing insertion operations only. Another conclusion is that our methods to calculate the move gain largely outperformed traditional techniques.
As a general conclusion, we may note that the DSRFLP is a very difficult combinatorial optimization problem, much harder than the SRFLP. Our experience suggests that, for the large-size instances in our test suite, the best solutions obtained are likely not the best possible. Finding improved solutions using VNS may require a substantial amount of CPU time. An obvious direction for further research is to develop new powerful metaheuristic-based algorithms for solving the DSRFLP. One idea is to combine SA and VNS into a single approach. This hybridization strategy has been successfully applied to several permutation-based problems, for example, the profile minimization problem [58]. Another promising idea is to develop evolutionary algorithms for the DSRFLP. Such algorithms may use the proposed LS procedure as a means for search intensification. In general, due to its complexity and simplicity of formulation, the DSRFLP can serve as a good candidate problem for the performance evaluation of newly introduced metaheuristic optimization methods.
Another possible area of future research is to design and implement metaheuristic algorithms for dynamic versions of other facility layout problems. In particular, a research effort could be devoted to developing such algorithms for a dynamic formulation of the multi-row facility layout problem. Loop layout problems are another example for which dynamic models can be considered. Finally, the research could also be directed towards the development of new optimization techniques for the dynamic parallel row ordering problem.

Conflicts of Interest:
The authors declare no conflict of interest.