A Primal-Dual Interior-Point Method for Facility Layout Problem with Relative-Positioning Constraints

Abstract: We consider the facility layout problem (FLP), in which we find the arrangement of departments with the smallest material handling cost, expressed as the product of the distances and the material flows between departments. It is known that the FLP can be formulated as a linear programming problem if the relative positioning of the departments is specified, and thus can be solved to optimality. In this paper, we describe a custom interior-point algorithm for solving the FLP with relative-positioning constraints (FLPRC) that is much faster than the standard methods used in general-purpose solvers. We build a compact formulation of the FLPRC and its dual, which enables us to establish the optimality conditions very quickly. We use these optimality conditions to implement a primal-dual interior-point method with an efficient Newton step computation that exploits the special structure of the Hessian. We confirm the effectiveness of the proposed method through applications to several well-known benchmark data sets. Our algorithm finds the optimal solution much faster than the standard methods.


Facility Layout Problem
The Facility Layout Problem (FLP) is one of the most fundamental issues in the design of production systems. The goal of the FLP is "locating the different facilities or departments in a plant in order to achieve the greatest efficiency in the production of a product or service" (Tompkins et al. 2010 [1]). In an industrial setting, putting in a poor layout may well lead to serious blunders in the use of a company's available land, or to costly re-arrangements that tear down buildings, walls, or major structures which are still usable but which subsequently turn out to be roadblocks to efficiency and low-cost operation [2]. Therefore, it is important for facility planners to develop layouts that avoid such mistakes.
Early research in this field includes Koopmans and Beckmann (1957) [11], where the FLP is formulated as a QAP (Quadratic Assignment Problem) by approximating the problem as one of assigning all facilities to different locations with the goal of minimizing the sum of the distances multiplied by the corresponding flows. In the 1990s, several exact formulations were presented that represent the candidate layout continuously (the 'continuous representation'), as opposed to the QAP formulation, which represents the candidate layout by a grid structure (the 'discrete representation'). According to Liu and Meller (2007) [12], "By representing the FLP in a discrete fashion, the FLP is simplified, but at the penalty of eliminating many solutions from consideration. Continuous representation is more accurate and realistic than discrete representation, and thus is capable of finding the 'real optimal' final layout solution." However, the continuous representation also increases the complexity of the FLP. From the optimization point of view, the FLP is a non-convex problem, and, unfortunately, there are no effective methods for solving problems in this class exactly. Methods for general non-convex problems therefore take several different approaches: the NLP (NonLinear Programming)-based approach, the MH (Meta-Heuristics)-based approach, and the MIP (Mixed-Integer-Programming)-based approach, each of which involves some compromise.
In the NLP-based approach, the FLP is represented by purely continuous variables and optimized by NLP techniques. The compromise is to give up seeking the global optimum and instead seek a point that is only locally optimal. As a result, the objective value of the local solution obtained depends heavily on the choice of the initial point, and much of the research therefore focuses on methods for producing a good initial point. This research includes Van Camp, Carter and Vannelli (1992) [13], Anjos and Vannelli (2002) [14], Anjos and Vannelli (2006) [15], Jankovits, Luo, Anjos and Vannelli (2011) [16], and Anjos and Vieira (2016) [17]. The general framework combines two models: the first is a relaxation of the layout problem, intended to find a good initial point for the iterative algorithm used to solve the second model; the second is an exact formulation of the facility layout problem, intended to find a locally optimal solution. These methods can be fast and can handle large-scale problems, but they still suffer from initial-point dependency even with the relaxation step, since the relaxed models proposed so far are not convex either.
In the MH-based approach, the FLP is represented by a string of finitely many integer variables and optimized by MH techniques (e.g., Simulated Annealing or a Genetic Algorithm). The compromise is the quality of the solution; an approximate solution is found by drawing some number of candidates from a probability distribution and taking the best one found. This approach includes LOGIC and FLEX-BAY. LOGIC is an improvement-type algorithm developed by [18], where a candidate layout is represented as a slicing tree. A slicing tree is a binary tree that encodes the recursive partitioning of a rectangle in such a way that each rectangular partition corresponds to the space allocated to a facility. In LOGIC, the slicing tree is optimized using a Genetic Algorithm, and further extensions were made by Gau and Meller (1999) [19], Shayan and Chittilappilly (2004) [20], Scholz, Petrick, and Domschke (2009) [21], and Kang and Chae (2017) [22]. FLEX-BAY is an improvement-type algorithm based on a continuous representation developed by Tate and Smith [23]. A layout is represented by a flexible number of vertical bays of varying width, each divided into one or more rectangular departments. Flexible bay layouts are encoded in a two-part representation: a permutation of the departments and breakpoints for the bays. FLEX-BAY uses a genetic algorithm to search the solution space by varying department-to-bay assignments or by adding or removing a bay breakpoint. This family of approaches includes Kulturel-Konak, Konak, Norman and Smith (2006) [24], Wong [29], and Garcia-Hernandez et al. (2020) [30]. Recent papers study multi-objective models. Liu et al. (2018) [31] developed a multi-objective model to minimize material handling cost, maximize the total adjacency value, and maximize space utilization, and applied a multi-objective particle swarm optimization. Guan et al.
(2019) [32] developed a multi-objective model to minimize material handling cost, minimize the number of workshops required for the departments, and maximize space utilization; they also applied a multi-objective particle swarm optimization. Another research trend is dynamic and stochastic models. Pourvaziri and Pierreval (2017) [33] developed a dynamic facility layout problem based on open queuing network theory. Turanoglu and Akkaya (2018) [34] introduced a hybrid heuristic algorithm based on bacterial foraging optimization. Tayal et al. (2017) [35] developed a stochastic dynamic facility layout problem. Several studies integrate the multi-objective and dynamic models. Azevedo et al. (2017) [36] developed a multi-objective model to minimize material handling cost, minimize reconfiguration cost, and minimize the unsuitability between departments and locations. Pourhassan and Raissi (2017) [37] developed a multi-objective model to minimize the total material handling and machine rearrangement costs and to minimize the number of possible transporter accidents, evaluated by simulation; they proposed a simulation-based optimization framework in which a genetic algorithm finds an optimal arrangement. While MH-based methods work very well in practice for larger problem instances, they have a structural disadvantage: these heuristics cannot consider all feasible solutions, because of the encoding scheme that represents solutions as strings for the use of MH techniques.
In the MIP-based approach, the FLP is represented by a combination of binary variables that specify the relative positioning of each pair of departments and continuous variables that denote the department positions and shapes. A general MIP-based method alternates between two steps: determining the relative positioning of the departments, and optimizing the department positions and shapes under the specified relative positioning. If the relative positioning of the departments is specified, the FLP reduces to a convex optimization problem such as an LP (Linear Programming), QP (Quadratic Programming), or SDP (SemiDefinite Programming) problem, and thus can be solved by the convex optimization techniques developed over the past few decades (e.g., the simplex method, active-set methods, interior-point methods). Unlike the other two approaches, which may miss the optimal solution, the MIP-based approach is a true global optimization approach and is therefore completely reliable. The compromise, however, is scalability; the first MIP-based method, presented by Montreuil in 1990 [38], could only solve problems with six or fewer departments. This model is now referred to as FLP0, and several modifications of the MIP-FLP formulation have been made, such as FLP1 by Meller et al. (1999) [39]; FLP2 by Sherali et al. (2003) [40]; an ε-accurate scheme for controlling the department area by Castillo and Westerlund (2005) [41]; a sequence-pair representation by Liu and Meller (2007) [12]; and a graph-pair representation by Bozer and Wang (2012) [42]. Even with these improvements, however, the number of departments that can be solved to optimality is still less than twelve (Chae and Regan 2016 [43]).

Research Purpose
Corresponding to the two MIP steps mentioned above, there are two approaches for speeding up the MIP-based method: searching the relative positionings more efficiently, or optimizing the department positions and shapes under a specified relative positioning faster. In this study, we focus primarily on the latter.
In this paper, we describe a custom interior-point method for solving the FLP with relative-positioning constraints (FLPRC) that is substantially faster than the standard methods used in general-purpose solvers. We build a compact formulation of the FLPRC and its dual, which enables us to establish the optimality conditions very quickly. We use these optimality conditions to implement a primal-dual interior-point method with an efficient Newton step computation that exploits the special structure of the Hessian. The computational effort of our method scales linearly with the number of departments, whereas that of standard methods scales cubically.
The outline of this paper is as follows. In Section 2, we describe the FLPRC, formulated as a linear programming problem in matrix form. In Section 3, we describe the overall algorithm. We first derive the dual problem and the Karush-Kuhn-Tucker (KKT) optimality conditions of the model. We use these KKT conditions to implement our primal-dual interior-point method. We describe the barrier subproblem associated with a primal-dual interior-point method for the FLPRC, and we show how the special structure of the FLPRC can be exploited to compute the search direction very efficiently. In Section 4, we give numerical results to illustrate the effectiveness of the proposed method. Finally, we provide our conclusions and discuss some directions for future research in Section 5.

Facility Layout Problem with Relative Positioning Constraints
In this section, we first review the FLPRC based on the FLP2 presented by [40]. We then describe our modified model and explain why it can be solved faster than the FLP2.

The FLPRC Formulation
We consider N departments i = 1, …, N, which have centroid positions (x_i, y_i) ∈ R² and shapes (w_i, h_i) ∈ R² given by half their widths and heights, for i = 1, …, N. They must be placed in a rectangular area with width W, height H, and lower-left corner at the origin (0, 0), under a specified relative positioning.
The objective of the FLP is to find a set of positions and shapes of departments r_i = [x_i, y_i, w_i, h_i]^T that yields the smallest possible material handling cost. In the FLP2, the material handling cost is measured as the product of the distances and the flows between departments, as in (1):

minimize Σ_{i<j} f_ij d_ij, (1)

where f_ij denotes the material flow from department i to department j and d_ij denotes the distance between the centroids of departments i and j. The choice of distance measure is usually either rectilinear or Euclidean; in the FLP2, rectilinear distances are used, as in (2):

d_ij = d^x_ij + d^y_ij = |x_i − x_j| + |y_i − y_j|. (2)

The absolute values in the distance function (2) can be linearized as in (3):

d^x_ij ≥ x_i − x_j,  d^x_ij ≥ −(x_i − x_j),  d^y_ij ≥ y_i − y_j,  d^y_ij ≥ −(y_i − y_j). (3)

We call these inequalities the distance constraints.
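As a quick sanity check on the linearization, the sketch below (our own illustrative Python, not part of the paper's implementation) computes the rectilinear distance of (2) directly and via the tightest values permitted by the inequalities in (3):

```python
def rectilinear(pi, pj):
    # d_ij = |x_i - x_j| + |y_i - y_j|  (Equation (2))
    return abs(pi[0] - pj[0]) + abs(pi[1] - pj[1])

def linearized_distance(pi, pj):
    # The smallest d_x satisfying d_x >= x_i - x_j and d_x >= -(x_i - x_j)
    # is exactly |x_i - x_j|; likewise for d_y (Equation (3)).
    dx = max(pi[0] - pj[0], -(pi[0] - pj[0]))
    dy = max(pi[1] - pj[1], -(pi[1] - pj[1]))
    return dx + dy
```

Because the objective minimizes over d^x_ij, d^y_ij, the inequalities in (3) are tight at the optimum, so the linearized form recovers (2) exactly.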
In the FLP, it is required that the departments lie inside the bounding rectangle with width W and height H, as in (4) and (5):

w_i ≤ x_i ≤ W − w_i, (4)
h_i ≤ y_i ≤ H − h_i. (5)
These inequalities are called the within-boundary constraints.
The aspect-ratio constraints require that the aspect ratio of each department be bounded between lower and upper limits, i.e.,

l_i ≤ h_i / w_i ≤ u_i, (6)

where the aspect ratio of department i is defined by h_i/w_i and the lower and upper bounds are l_i and u_i, respectively. Multiplying both sides of each inequality by w_i reduces these constraints to linear inequalities, as in (7):

l_i w_i ≤ h_i ≤ u_i w_i. (7)
In case the dimensions of a department are known a priori (often called the Machine Layout Problem, or MLP), we can use a fixed aspect ratio with l_i = u_i, which results in a linear equality constraint.

Let s_i be one quarter of the minimal area of department i for i = 1, …, N (recall that w_i and h_i are half-widths and half-heights, so the department area is 4 w_i h_i). The area constraints then require Equation (8):

w_i h_i ≥ s_i. (8)

Instead of these exact constraints, we use the outer polyhedral approximation proposed by [40] to obtain a speed-up, as in (9):

h_i ≥ (2 s_i)/w̄_im − (s_i/w̄_im²) w_i, m = 1, …, M, (9)

where the affine support points are denoted by w̄_im and the number of affine support points by M. Each inequality in (9) is the tangent line to the curve w_i h_i = s_i at w_i = w̄_im, so the resulting polyhedron contains the feasible region of (8). By using the constraints (9) instead of (8), the area constraints reduce to standard linear inequalities, and the problem can be solved much faster.

The relative-positioning constraints require that, for each pair of departments (i, j) with i ≠ j, department i be placed to the left of, to the right of, above, or below department j. These relations are specified with the binary variables H (meaning 'horizontal') and V (meaning 'vertical'), as in (10) and (11):

H_ij = 1 if department i is placed to the left of department j, and 0 otherwise; (10)
V_ij = 1 if department i is placed below department j, and 0 otherwise. (11)

Using these variables, the following linear inequalities are imposed, as in (12) and (13):

x_i + w_i ≤ x_j − w_j if H_ij = 1, (12)
y_i + h_i ≤ y_j − h_j if V_ij = 1. (13)

Although we will not determine the relative positioning here, we mention that the following condition must hold to ensure that the departments do not overlap (14):

H_ij + H_ji + V_ij + V_ji ≥ 1 for all i ≠ j. (14)

These constraints are included in the MIP-based FLP and make the problem a complicated combinatorial problem. We summarize the overall problem formulation as in (15): minimize the objective (1) subject to the constraints (3), (4), (5), (7), (9), (12), and (13) under the specified H_ij, V_ij.

In the following subsections, we describe the redundant inequalities that can be eliminated by exploiting the structure of the relative-positioning constraints. This elimination yields more sparsity in the matrix of the linear equation used to compute the Newton step, which is the main effort of the algorithm, and thus speeds up the algorithm.
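To make the separation conditions (12)-(14) concrete, here is a minimal sketch (our own Python illustration, assuming the standard form of the separation inequalities; not the paper's code) that checks whether a specified relative positioning actually keeps a pair of departments from overlapping:

```python
def separated(ri, rj, horizontal):
    # With H_ij = 1, (12) requires x_i + w_i <= x_j - w_j (i strictly to the
    # left of j); with V_ij = 1, (13) requires the same for y and h.
    xi, yi, wi, hi = ri
    xj, yj, wj, hj = rj
    if horizontal:
        return xi + wi <= xj - wj
    return yi + hi <= yj - hj

def no_overlap(ri, rj, Hij, Hji, Vij, Vji):
    # (14): at least one of the four relations must hold (and actually hold
    # geometrically) for the pair of departments not to overlap.
    rels = [(Hij, ri, rj, True), (Hji, rj, ri, True),
            (Vij, ri, rj, False), (Vji, rj, ri, False)]
    return any(flag and separated(a, b, horiz) for flag, a, b, horiz in rels)
```

Each department is a tuple (x, y, w, h) of centroid coordinates and half-dimensions, matching the layout vector r_i in the text.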

Redundancy in Distance Constraints
For each pair of departments (i, j) with i ≠ j, the two inequalities in (3) for each axis can be reduced to a linear equality once H_ij and V_ij are specified. For example, if H_ij = 1, then (12) implies x_j − x_i ≥ w_i + w_j > 0, so d^x_ij = x_j − x_i; similarly, if V_ij = 1, then d^y_ij = y_j − y_i. Substituting these equalities into d^x_ij, d^y_ij in the objective function (1), these variables and equalities disappear.
Assuming that the non-overlapping condition (14) holds, at least C(N,2) = N(N−1)/2 variables and 2·C(N,2) = N(N−1) inequalities can be removed. We can further reduce the distance variables and constraints in a similar way for each pair of departments (i, j) with f_ij = 0.

Redundancy in Within-Boundary Constraints
The within-boundary constraints can also be reduced by using the relative-positioning structure. The constraint w_i ≤ x_i in (4) only needs to be imposed on the left-most departments, i.e., for all i such that Σ_{j=1}^{N} H_ij = 0. In a similar way, a minimal set of within-boundary constraints can be obtained as follows. Let N_left, N_right, N_upper, N_lower be the numbers of left-most, right-most, upper-most, and lower-most departments, respectively; then the number of inequalities can be reduced to N_left + N_right + N_upper + N_lower. Assuming that the non-overlapping condition (14) holds, this number is at most 2(N + 1), which is less than the original 4N inequalities.
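The selection rule above can be sketched as follows (our own illustrative Python; it simply applies the paper's stated row-sum condition to a 0/1 matrix H):

```python
def boundary_departments(H):
    # Departments on which the constraint w_i <= x_i must be kept:
    # those i with sum_j H[i][j] == 0, following the condition in the text.
    n = len(H)
    return [i for i in range(n) if sum(H[i]) == 0]
```

The analogous rule with H transposed (or with V) selects the departments for the other three sides of the bounding rectangle.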

Redundancy in Relative-Positioning Constraints
Finally, we mention the redundant inequalities among the relative-positioning constraints. It is obvious that if H_ij = 1 and H_jk = 1 hold, then H_ik = 1 must hold. Using these transitivity relations, we can remove the redundant inequalities.
A minimal set of relative-positioning constraints can easily be obtained by eliminating the indirect paths in H_ij or V_ij with the Warshall-Floyd algorithm.
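The elimination of indirect paths can be sketched as a transitive reduction built on Warshall-Floyd reachability (our own illustrative Python; edge (i, j) plays the role of H_ij = 1):

```python
def transitive_reduction(n, edges):
    # Warshall-Floyd-style transitive closure over n nodes
    reach = [[False] * n for _ in range(n)]
    for i, j in edges:
        reach[i][j] = True
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = True
    # Keep only edges not implied by an indirect path through some k
    return {(i, j) for (i, j) in edges
            if not any(reach[i][k] and reach[k][j]
                       for k in range(n) if k != i and k != j)}
```

For example, with departments 0 left of 1 and 1 left of 2, the constraint for the pair (0, 2) is implied and can be dropped.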

The FLPRC Formulation in a Matrix Form
In this section, we introduce a simpler formulation in matrix form. This formulation reduces neither the total number of variables nor the number of constraints, but it removes many of the row/column permutations otherwise needed to solve the linear equation for the Newton step.

Notation
First, we describe the notation used in this section. 0_m denotes the (m × 1) vector of all zeros, and 1_m denotes the (m × 1) vector of all ones. I denotes the identity matrix, and diag(a) denotes the diagonal matrix with diagonal entries a_1, …, a_m, where a_i is the i-th entry of a.

Decision Vector
Let x, y, w, h ∈ R^N denote the vectors whose i-th entries are x_i, y_i, w_i, h_i, respectively. We define the layout vector r ∈ R^{4N} as

r = [x^T, y^T, w^T, h^T]^T.

Objective Function
We define the flow vector f ∈ R^{N(N−1)/2} as the vector whose entries are the flows f_ij for the pairs (i, j) with i < j, and let d ∈ R^{N(N−1)} stack the distance variables d^x_ij and d^y_ij. Using these vectors, we can write the objective function (1) as in (16):

minimize [f^T, f^T] d. (16)

If a pair of departments (i, j) can be reduced using H_ij, V_ij from the redundancy in the distance constraints, then the corresponding term moves into a cost vector on r (with coefficients ±f_ij in the positions of x_i, x_j or y_i, y_j, with signs determined by H_ij, V_ij), as in (17), and the k-th entry of f corresponding to (i, j) becomes 0.

Distance Constraints
We define the distance constraints matrix A_d ∈ R^{N(N−1)×4N} as the block matrix built from blocks A_di, where each row of A_d computes the difference x_i − x_j or y_i − y_j for a pair (i, j). Using these matrices, we can write the distance constraints (3) as in (18):

−d ≤ A_d r ≤ d. (18)
If there are redundant constraints that can be eliminated, the corresponding rows are eliminated. Instead, the corresponding term is represented by the cost vector (17).

Within-Boundary Constraints
We define the within-boundary constraints matrix A_wb ∈ R^{4N×4N} and the within-boundary constraints vector b_wb ∈ R^{4N} so that the rows encode w_i − x_i ≤ 0, x_i + w_i ≤ W, h_i − y_i ≤ 0, and y_i + h_i ≤ H. Using these matrices, the within-boundary constraints can be written as in (19):

A_wb r ≤ b_wb. (19)
If there are redundant constraints that can be eliminated, the entries of corresponding department become 0.

Aspect-Ratio Constraints
The aspect-ratio constraints (7) can be written as L w ≤ h and h ≤ U w, where L = diag(l_1, …, l_N) ∈ R^{N×N} and U = diag(u_1, …, u_N) ∈ R^{N×N}. We can express these bounds in matrix form, or simply write them as in (20):

A_ar r ≤ b_ar, (20)

where A_ar ∈ R^{4N×4N} and b_ar ∈ R^{4N}. The approximated area constraints (9) can be written compactly in the same way as

A_s r ≤ b_s,

where A_s and b_s collect the M tangent inequalities of (9) for each department.

Relative-Positioning Constraints
The relative-positioning constraints in matrix form can be written as

A_rp r ≤ b_rp,

where A_rp ∈ R^{N(N−1)×4N} and b_rp ∈ R^{N(N−1)} are defined as follows. For a department pair (i, j) with H_ij = 1, in the m-th row corresponding to (i, j), the coefficients of x_i, w_i, and w_j are 1 and the coefficient of x_j is −1 (i.e., x_i + w_i + w_j − x_j ≤ 0). Similarly, for a department pair (i, j) with V_ij = 1, in the m-th row corresponding to (i, j), the coefficients of y_i, h_i, and h_j are 1 and the coefficient of y_j is −1. We eliminate the rows corresponding to the redundant inequalities identified above.

Dual Problem and Optimality Conditions for FLPRC
In this section, we derive a dual problem for the FLPRC (23) and outline the optimality conditions. We introduce the Lagrange multiplier vector λ = [λ_1, …, λ_{n_λ}]^T ∈ R^{n_λ}, with one multiplier for each constraint a_i^T v ≤ b_i, where a_i^T is the i-th row vector of the constraint matrix A. We let n_λ = dim(λ) = (5/2)N² + (11/2 + M)N denote the dimension of λ. The Lagrangian is then

L(v, λ) = c^T v + Σ_{i=1}^{n_λ} λ_i (a_i^T v − b_i).

To obtain the dual function, we minimize L over the primal variable v. We find that the minimum is −∞ unless c + Σ_{i=1}^{n_λ} λ_i a_i = 0. The dual function is therefore g(λ) = −b^T λ if c + A^T λ = 0, and −∞ otherwise. Thus, we can write the dual of the FLPRC as

maximize −b^T λ subject to c + A^T λ = 0, λ ≥ 0.

The optimality conditions, i.e., the Karush-Kuhn-Tucker (KKT) conditions, are given as

c + A^T λ* = 0, (26a)
λ*_i (b_i − a_i^T v*) = 0, i = 1, …, n_λ, (26b)
A v* ≤ b, (26c)
λ* ≥ 0. (26d)

The first equation (26a) states that the gradient of the Lagrangian must be zero, since the optimal point v* minimizes L(v, λ*) over v. The second equation (26b) is the complementary slackness condition. The third and fourth conditions (26c) and (26d) are primal and dual feasibility, respectively.
There exists a pair (v*, λ*) satisfying the KKT conditions (26) if and only if v* is an optimal point. In other words, solving the LP (23) is equivalent to finding a point (v*, λ*) that satisfies the KKT conditions. As in most other convex problems, however, it is difficult to find such points directly, due to the large number of linear inequalities that must be maintained for primal and dual feasibility in the KKT conditions (26).
The interior-point method is a technique for solving such inequality-constrained problems by reducing them to a sequence of problems without inequality constraints. The details are described in the next section.

Custom Interior-Point Method for Solving FLPRC
In this section, we describe an efficient method for solving the FLPRC (23). The method is a primal-dual interior-point method, one of the most powerful techniques for solving Linear Programming (LP) problems and other classes of convex optimization problems. We propose an efficient Newton step computation that exploits the special structure of the problem. Our algorithm is also applicable to other types of interior-point methods with minor changes.

Barrier Subproblem
The main idea of the interior-point method is to cast the inequality constraints into the objective function as

minimize t c^T v + φ(v), (27)

where t is a scaling parameter and φ(v) is defined as

φ(v) = −Σ_{i=1}^{n_λ} log(b_i − a_i^T v). (28)

Problem (27) is called a barrier subproblem and the function φ is called a barrier function. This function is convex and smooth, and thus the barrier subproblem is convex whenever the original problem is convex. The first derivative of this function is

∇φ(v) = Σ_{i=1}^{n_λ} a_i / (b_i − a_i^T v). (29)

We define z ∈ R^{n_λ} with elements

z_i = b_i − a_i^T v. (30)

The optimality condition for the barrier subproblem is

t c + Σ_{i=1}^{n_λ} a_i / z_i = 0. (31)

Comparing this with the gradient of the Lagrangian in (26a), we can identify

λ_i = 1/(t z_i), i = 1, …, n_λ. (32)

Substituting this into the complementary slackness condition (26b) and eliminating the linear inequalities (26c) and (26d), we have the modified KKT conditions:

c + A^T λ = 0,  λ_i z_i = 1/t, i = 1, …, n_λ. (33)

The modified KKT conditions (33) satisfy the following two properties:
• These conditions are far easier to solve than the KKT conditions (26).
• As t increases, these conditions approach the KKT conditions (26).
Combining these two properties, the FLPRC is solved to optimality by solving (33) while increasing t. The modified KKT conditions (33) are solved by Newton's method, which is described in detail in the next section.
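As an illustration (our own Python sketch; A, b, v are generic placeholders for the constraint data and a strictly feasible point), the log-barrier value and its gradient can be computed as:

```python
import numpy as np

def barrier(A, b, v):
    # phi(v) = -sum_i log(b_i - a_i^T v), defined only for strictly feasible v
    z = b - A @ v                      # slack vector z_i = b_i - a_i^T v
    assert np.all(z > 0), "v must be strictly feasible"
    phi = -np.sum(np.log(z))
    grad = A.T @ (1.0 / z)             # sum_i a_i / (b_i - a_i^T v)
    return phi, grad
```

For example, for the one-dimensional box −1 ≤ v ≤ 1 (A = [[1], [−1]], b = [1, 1]), the gradient vanishes at v = 0 by symmetry, as expected of the analytic center.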

Newton Method
We rewrite the modified KKT conditions (33) in residual form, as in (34).
In a primal-dual interior-point method, we start from some initial point (v_0, λ_0) and update it in such a way that the residual of (34) is driven to zero. At each iteration, primal and dual search directions (∆v_k, ∆λ_k) and a step size γ_k are computed, and the new point is obtained as (v_{k+1}, λ_{k+1}) = (v_k + γ_k ∆v_k, λ_k + γ_k ∆λ_k). In Newton's method, the search direction is characterized by the solution of the linear equation (35), which follows from a first-order Taylor approximation of (34); its solution is therefore a good estimate of the solution of Equation (34). Writing (35) out in matrix form gives (36). After computing the Newton step, we carry out a backtracking line search to find the step size γ_k.
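The backtracking line search can be sketched as follows (our own illustrative Python under common interior-point conventions; `residual` is a placeholder for the residual function of (34), and alpha, beta are typical line-search parameters, not values from the paper):

```python
import numpy as np

def backtracking_step(v, lam, dv, dlam, residual, alpha=0.01, beta=0.5):
    # First shrink gamma until the dual variables stay strictly positive,
    # then shrink until the residual norm decreases sufficiently.
    gamma = 1.0
    while np.any(lam + gamma * dlam <= 0):
        gamma *= beta
    r0 = np.linalg.norm(residual(v, lam))
    while np.linalg.norm(residual(v + gamma * dv, lam + gamma * dlam)) > (1 - alpha * gamma) * r0:
        gamma *= beta
    return gamma
```

For a Newton direction the second loop is guaranteed to terminate, since the directional derivative of the residual norm is negative.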

Efficient Newton Step Computation
In this section, we show how to compute the Newton step (∆v_k, ∆λ_k), i.e., solve Equation (36), efficiently. These equations can be solved using standard methods for linear equations; however, by exploiting the special structure of the equations, they can be solved even faster. The method we describe is based on a sequence of three elimination steps, in which particular blocks of variables are eliminated via the Schur complement.

1st Elimination Step
Our first step is to eliminate the variables ∆λ_k from Equation (35). From the second block equation, we obtain ∆λ_k as an explicit function of ∆v_k, as in (37). Substituting this into the first block equation in (35), we obtain ∆v_k from the reduced system (38), whose coefficient matrix is A^T Z^{−1} L A, where Z = diag(z) and L = diag(λ). Equations (37) and (38) are an alternative to the original Equation (36). Instead of solving the large set of linear equations (36), we find ∆v_k from the smaller Equation (38), which can be solved easily as described in the next two sections, and then calculate ∆λ_k from Equation (37).
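The first elimination can be sketched as follows (our own Python illustration with generic residuals r_dual, r_cent; the sign conventions follow a common primal-dual textbook layout and may differ from the paper's):

```python
import numpy as np

def newton_step_eliminated(A, z, lam, r_dual, r_cent):
    # Full Newton system (illustrative convention):
    #   [ 0     A^T ] [dv  ]   [-r_dual]
    #   [-L A   Z   ] [dlam] = [-r_cent],   Z = diag(z), L = diag(lam)
    # Eliminating dlam = Z^{-1} (L A dv - r_cent) gives the reduced system
    #   (A^T Z^{-1} L A) dv = -r_dual + A^T Z^{-1} r_cent.
    w = lam / z                              # diagonal of Z^{-1} L
    M = A.T @ (w[:, None] * A)               # A^T Z^{-1} L A
    dv = np.linalg.solve(M, -r_dual + A.T @ (r_cent / z))
    dlam = w * (A @ dv) - r_cent / z
    return dv, dlam
```

Because Z and L are diagonal, forming the reduced matrix costs only matrix products with a diagonal scaling, and the solve is of size n_v instead of n_v + n_λ.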

2nd Elimination Step
We break down the equation further and eliminate some of the blocks to reduce the size of the linear system. From our definition of v = [r^T, d^T]^T and the block structure of A, Equation (38) splits into block equations in (∆r_k, ∆d_k), with right-hand sides (g_r, g_d) derived from the original right-hand side, as in (40). From the second block equation in (40), we obtain ∆d_k as an explicit function of ∆r_k, as in (41). Substituting this into the first block equation in (40), we obtain ∆r_k; the sum of the first and fourth terms can be combined, and, defining D_d and g accordingly, we obtain ∆r_k from (42).

3rd Elimination Step
Equation (42) can be further broken down into smaller pieces and solved efficiently. We rewrite Equation (42) as in (43):

E ∆r_k = −g. (43)

The term A_r^T D_r A_r can be factorized as in Equation (44):

A_r^T D_r A_r = A_wb^T D_wb A_wb + A_ar^T D_ar A_ar + A_s^T D_s A_s + A_rp^T D_rp A_rp, (44)

where D_wb, D_ar, D_s, D_rp are the diagonal matrices with the entries corresponding to the within-boundary, aspect-ratio, area, and relative-positioning constraints, respectively. We decompose ∆r_k with respect to (∆x, ∆y) and (∆w, ∆h) as ∆r_k = [∆r_(xy)k^T, ∆r_(wh)k^T]^T. Factorizing (43) into block elements with respect to the (∆x, ∆y) and (∆w, ∆h) vectors, we obtain Equation (45). We now discuss the (m, n) block elements E_mn, m = 1, 2, n = 1, 2. We denote the (m, n) block elements of E_d, E_r, E_wb, E_ar, E_s, E_rp by E^d_mn, E^r_mn, E^wb_mn, E^ar_mn, E^s_mn, E^rp_mn. First, E^wb_mn, E^ar_mn, E^s_mn for m = 1, 2, n = 1, 2 are all diagonal matrices, because the block elements of A_wb, A_ar and the matrices D_wb, D_ar are diagonal. As for E^rp_mn, the block elements of A_rp are sparse and D_rp is diagonal, so each E^rp_mn is a sparse symmetric matrix. Since each E^r_mn is the sum of these diagonal matrices and a sparse symmetric matrix, each E^r_mn is itself a sparse symmetric matrix. On the other hand, E^d_mn has only components related to (x, y), so E^d_11 is a dense matrix and the other block elements satisfy E^d_12 = E^d_21 = E^d_22 = O. Taking the sum of each block element, E_11 is therefore a dense matrix, while the other block elements E_12, E_21, and E_22 are sparse symmetric matrices.
Speed-up is achieved using this structure. From the first equation in (45), we can obtain ∆r_(wh)k as in Equation (46). Substituting this into the second equation in (45), we obtain Equation (47) for ∆r_(xy)k. Using this elimination, we can compute ∆r_(xy)k efficiently; we then recover the remaining variables ∆r_(wh)k, ∆d_k, ∆λ_k, which is faster than solving the larger set of equations (36) directly with a standard factorization method.
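The two-step substitution in (46)-(47) is an instance of generic block elimination; a minimal sketch follows (our own illustrative Python, written densely for clarity, whereas the point of the paper's scheme is that the eliminated block is cheap to factor):

```python
import numpy as np

def block_solve(E11, E12, E21, E22, g1, g2):
    # Solve [[E11, E12], [E21, E22]] [u1; u2] = [g1; g2] by eliminating u2:
    #   u2 = E22^{-1} (g2 - E21 u1)
    #   (E11 - E12 E22^{-1} E21) u1 = g1 - E12 E22^{-1} g2   (Schur complement)
    S = E11 - E12 @ np.linalg.solve(E22, E21)
    u1 = np.linalg.solve(S, g1 - E12 @ np.linalg.solve(E22, g2))
    u2 = np.linalg.solve(E22, g2 - E21 @ u1)
    return u1, u2
```

When the eliminated block is sparse or diagonal, as E_22 is here, forming and applying its inverse is cheap, so the dominant cost is the solve with the dense Schur complement.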
Note that all of these matrices are nonsingular if there exists an optimal solution of problem (15).

Complexity Analysis
In this section, we analyze the computational complexity. The complexity is measured by the number of floating-point operations (flops), and all of the calculations in this section are based on Boyd and Vandenberghe [44]. Let n_v = dim(v) = 4N + N(N−1) denote the dimension of v. The size of the coefficient matrix of the original Equation (36) is (n_v + n_λ). Using a standard LU decomposition, the computational complexity of solving Equation (36) is (2/3)(n_v + n_λ)³, which is cubic in n_λ.
The computational effort after the 1st elimination is 2 n_v² n_λ + (2/3) n_v³. In this expression, the complexity is only linear in n_λ; since n_λ > n_v, the amount of computation is greatly reduced. The effort in the second term can be further reduced by the 2nd elimination. Let n_r = dim(r) = 4N denote the dimension of r and n_d = dim(d) = N(N−1) the dimension of d, so that (2/3)n_v³ = (2/3)(n_r + n_d)³. The computational effort after the 2nd elimination is 2 n_r² n_d + (2/3) n_r³. In this expression, the complexity is only linear in n_d; since n_d > n_r for N > 5, the amount of computation is again greatly reduced. The effort in the second term can be further reduced by the 3rd elimination. Let n_xy = dim(x) + dim(y) = 2N denote the combined dimension of x and y, and n_wh = dim(w) + dim(h) = 2N that of w and h, so that (2/3)n_r³ = (2/3)(n_xy + n_wh)³. The computational effort after the 3rd elimination is n_wh + 2 n_xy² n_wh + (2/3) n_xy³.
Here, the first term is the cost when a sparse factorization is used, exploiting the fact that the matrix E_22 in Equation (47) is symmetric and sparse. The third term involves a block diagonal matrix, which decomposes into separate equations for x and y, so its cost reduces to (2/3)N³ × 2 = (4/3)N³. The total computational effort after the 3rd elimination is therefore 2 n_v² n_λ + 2 n_r² n_d + n_wh + 2 n_xy² n_wh + (4/3)N³. These complexities are summarized in Table 1. As the analysis shows, decomposing the matrix and solving a sequence of smaller equations is far cheaper than solving the original matrix equation directly.
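To see the scale of the reduction, the sketch below evaluates the flop counts above for a given N (our own illustration; the number of support points M = 4 is an assumed default, not a value from the paper):

```python
def flop_counts(N, M=4):
    # Dimensions from the text
    n_v = 4 * N + N * (N - 1)
    n_lam = (N * (5 * N + 11 + 2 * M)) // 2      # (5/2)N^2 + (11/2 + M)N
    n_r, n_d = 4 * N, N * (N - 1)
    n_xy = n_wh = 2 * N
    full = (2 / 3) * (n_v + n_lam) ** 3          # LU on the original system (36)
    after1 = 2 * n_v**2 * n_lam + (2 / 3) * n_v**3
    after3 = (2 * n_v**2 * n_lam + 2 * n_r**2 * n_d
              + n_wh + 2 * n_xy**2 * n_wh + (4 / 3) * N**3)
    return full, after1, after3
```

Each elimination replaces a cubic term with one that is only linear in the eliminated dimension, so the counts decrease monotonically.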

Starting-Point Using Flexible Bay Structure
In this section, we present a technique for creating a starting point to further speed up the computation. While our proposed primal-dual interior-point method is effective enough on its own, there is a potential practical advantage in utilizing a 'good' starting point. In the context of interior-point methods, a starting point must be strictly feasible. We therefore define a 'good' starting point as one that is close to optimality yet sufficiently far from the boundary of the feasible region, and describe a technique for creating such a starting point below.
Our method is based on a relaxed Flexible-Bay Structure (FBS), in which the departments are located in parallel bays whose varying widths are determined by the total area of the departments in each bay, using a two-part representation: a permutation of the departments and breakpoints for the bays. Figure 1 presents an example illustrating the bay assignment and the layout obtained from the horizontal and vertical relative-positioning graphs.
In our proposal, the permutation and breakpoints are specified using the relative positioning H_ij, V_ij, and the shapes (w_i, h_i) and positions (x_i, y_i) of the departments are determined through the relaxed-FBS decoding scheme. The layout obtained from the FBS procedure is near-optimal and feasible, but not strictly feasible in general, since adjacent departments have no separation between them. Therefore, an additional small clearance ρ (say, 10⁻⁵) is used to make the point strictly feasible. The outline of starting-point generation using the relaxed FBS is as follows: given H_ij, V_ij, and ρ,
1. assign each department i to the bay k given by its horizontal precedence Σ_{j=1}^{N} H_ij = k;
2. sort the departments within each bay k in the order of their vertical precedence Σ_{j=1}^{N} V_ij;
3. compute each bay area B_k as B_k = Σ_{i∈k} S_i;
4. compute each bay width W_Bk from the bay area B_k and the height H of the bounding rectangle, bounded below by the minimum widths w_i^min of the departments in the bay;
5. compute the department widths, heights, and positions from the bay widths, subtracting the clearance ρ ∈ R between departments so that the point is strictly feasible.

Experimental Setup
The relative positionings H_ij, V_ij used in the experiments were generated in two steps: first drawing randomly distributed locations, then specifying H_ij, V_ij for all i, j with i ≠ j from those locations. We discarded infeasible combinations. The FLPRC is a convex problem, so the optimal solution is obtained by every method and the material handling cost is the same for all algorithms; for this reason, we compared the results only in terms of computational efficiency. To illustrate the efficiency of our proposed elimination algorithm, we first compared the average CPU time for computing a Newton step for each problem against two mathematical solvers (CPLEX, NEOS). We then compared the average CPU time for finding an optimal arrangement under the relative-positioning constraints for each problem. The parameter settings and the computer specifications used to code and run the experiments are summarized in Table 2.
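The relaxed-FBS decoding outlined above can be sketched as follows (our own simplified Python illustration; the proportional width and height rules, the function name, and the argument layout are assumptions, not the paper's exact formulas):

```python
def fbs_decode(areas, bays, W, H, rho=1e-5):
    # areas[i]: area of department i; bays: list of bays, each a list of
    # department indices ordered bottom-to-top; bays are placed left-to-right
    # in a W x H bounding rectangle.
    total = sum(areas)
    layout = {}
    x0 = 0.0
    for bay in bays:
        bay_area = sum(areas[i] for i in bay)
        bw = W * bay_area / total            # bay width from its area share
        y0 = 0.0
        for i in bay:
            dh = H * areas[i] / bay_area     # department height within the bay
            # (centroid x, centroid y, half-width, half-height), shrunk by rho
            layout[i] = (x0 + bw / 2, y0 + dh / 2, bw / 2 - rho, dh / 2 - rho)
            y0 += dh
        x0 += bw
    return layout
```

Shrinking each half-dimension by ρ leaves a clearance of 2ρ between adjacent departments, so the decoded point is strictly feasible for the separation constraints (12) and (13).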

Experimental Results
The comparison of the average CPU time for computing a Newton step for each problem, with and without eliminations, is summarized in Table 3. The comparison of the average CPU time for computing an optimal arrangement under relative-positioning constraints is summarized in Table 4.
These examples show that, by exploiting the special structure of the problem, the proposed method reduces the CPU time needed to compute Newton steps. In our implementation, this corresponds to around 0.1-0.2 µsec. In particular, the proposed method is much faster for large problems: for an FLPRC instance with N = 35, it solves the Newton equations in 0.2 µsec, approximately 70 times faster than the generic solvers. The sparsity pattern of the coefficient matrix of the Newton equation for SC35 is shown in Figure 2. In this example, the dimension of the coefficient matrix is n_v + n_λ = 1330 + 3955 = 5285 according to Equation (36). However, because redundant constraints are removed, the size of the matrix is reduced to n_v + n_λ = 1330 + 3360 = 4690. The lower-right block is a diagonal matrix; by exploiting this structure, the main computational effort is reduced to solving Equation (38), whose coefficient matrix has size n_v = 1330, which is much more efficient than solving the original Equation (36).
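The gain from the diagonal lower-right block can be illustrated with a generic block elimination: for a system [[A, B], [C, D]] with D diagonal, the second block of unknowns is eliminated almost for free, leaving only a dense system of the first block's size. The sketch below shows the idea on a tiny system; it is a self-contained illustration, not the paper's implementation, which works with the specific matrices of Equations (36)-(38).

```python
def gauss_solve(A, b):
    # Solve a dense system A x = b by Gaussian elimination with partial pivoting.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def solve_with_diagonal_elimination(A, B, C, d, b1, b2):
    """Solve [[A, B], [C, diag(d)]] [x; y] = [b1; b2] by eliminating y.

    Since diag(d) is diagonal, y = diag(d)^-1 (b2 - C x) costs only O(n) per
    entry, and x solves the smaller Schur-complement system
        (A - B diag(d)^-1 C) x = b1 - B diag(d)^-1 b2.
    """
    n, m = len(b1), len(b2)
    S = [[A[i][j] - sum(B[i][k] * C[k][j] / d[k] for k in range(m))
          for j in range(n)] for i in range(n)]
    rhs = [b1[i] - sum(B[i][k] * b2[k] / d[k] for k in range(m)) for i in range(n)]
    x = gauss_solve(S, rhs)               # dense solve only at the reduced size n
    y = [(b2[k] - sum(C[k][j] * x[j] for j in range(n))) / d[k] for k in range(m)]
    return x, y
```

Only the Schur-complement system of size n_v is solved densely; the diagonal block contributes a trivial back-substitution, which is exactly why reducing Equation (36) to Equation (38) pays off.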
The sparsity pattern of the coefficient matrix of Equation (38) is shown in Figure 3. As in Figure 2, the lower-right block is diagonal, so exploiting this structure is more efficient than solving Equation (38) directly.
Furthermore, the sparsity pattern of the coefficient matrix of Equation (42) is shown in Figure 4. This matrix is dense only in the blocks corresponding to the components m = n = 1, ..., N and m = n = N + 1, ..., 2N, and sparse elsewhere. Therefore, the main computational load consists of solving the system associated with this dense block. Table 5 shows the computational effort, based on the flop counts given in Table 1. The number of flops required to compute the Newton steps is substantially reduced by the proposed block eliminations. This reduction leads to the faster CPU time for computing Newton steps shown in Table 3, and in turn to the faster CPU time for finding an optimal solution of the FLPRC shown in Table 4.
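As a rough sanity check on the flop reduction, the cubic cost of a dense factorization already explains most of the gain: shrinking the system from n_v + n_λ = 4690 to n_v = 1330 divides the dominant term by (4690/1330)^3 ≈ 44. A sketch using the standard ~(2/3)n^3 flop count for LU factorization (the further gains in Table 5 come from the sparsity of the blocks, which this crude estimate ignores):

```python
def lu_flops(n):
    # Rough leading-order flop count for LU factorization of a dense n x n matrix.
    return (2.0 / 3.0) * n ** 3

full = lu_flops(1330 + 3360)   # solving the full system of Equation (36)
reduced = lu_flops(1330)       # after eliminating the diagonal block, Equation (38)
print(round(full / reduced))   # cubic speedup factor, roughly 44
```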
These results indicate that the proposed method solves the problem faster by exploiting the special structure of the matrix. Since the dimension of the final dense matrix matches the number of departments, no further decomposition is possible, and there is little room for further improvement on this problem.

Discussion
The method proposed in this study optimizes the department positions and shapes given a relative positioning H_ij, V_ij. This technique can therefore be used as a subroutine in MIP-based solution methods, such as those of Liu and Meller [12] and Bozer and Wang [42], which combine MIP and MH. In these algorithms, the FLPRC is solved many times; by using the proposed method, the computation time can be shortened, and more time can be devoted to searching over the relative positions.
The proposed method can also be used as a subroutine of layout techniques based on FBS, such as [30], because FBS is used to derive the starting point in the proposed method. FBS uses an encoding that divides the building into bays. Although this approach is simple, it does not necessarily guarantee optimality, because some layouts cannot be expressed by the encoding. On the other hand, if the proposed method is applied during the search, based on the relative positions obtained by FBS, the FBS layout can be improved, and a better layout may be found. In recent years, many MH approaches have used FBS and achieved very good results; further improvement can be expected by applying this technique.

Conclusions
We have considered the problem of computing an optimal arrangement of departments within a facility, as measured by the minimum material handling cost, subject to within-boundary, aspect-ratio, non-overlapping, and relative-positioning constraints on the departments. We have developed a primal-dual interior-point method that exploits the special structure of the problem and is much faster than a standard method. We have proposed an efficient computation of the Newton step using three types of block eliminations. To illustrate the effectiveness of our algorithm, we have given numerical results using instances from several FLP benchmark problems, with relative positionings generated by a randomized rule. For a typical problem with 35 departments, the proposed algorithm is roughly 10 times faster. Using the proposed algorithm, we can accelerate a MIP (Mixed-Integer Programming)-based algorithm for solving the FLP, which is one of the most efficient methods for continuous-representation-based problems.
Future research includes applying the proposed method to MIP-based and MH-based approaches. When applied to a MIP-based approach, the relaxed problem in the branch-and-bound method can be solved faster, which may make larger problems tractable; existing MIP-based approaches can only solve problems with up to 12 departments. When applied to an MH-based approach, the proposed technique can exploit the relative positional relationships of the layouts obtained by FLEX-BAY or LOGIC. By doing so, a better layout can be obtained at each iteration, and thus there is a possibility of finding a better solution.