Article

Comparative Analysis of Dedicated and Randomized Storage Policies in Warehouse Efficiency Optimization

by Rana M. Saleh 1,2 and Tamer F. Abdelmaguid 2,*
1 Mechanical Engineering Department, Faculty of Engineering, Future University in Egypt, Cairo 11835, Egypt
2 Mechanical Design and Production Department, Faculty of Engineering, Cairo University, Giza 12613, Egypt
* Author to whom correspondence should be addressed.
Eng 2025, 6(6), 119; https://doi.org/10.3390/eng6060119
Submission received: 26 April 2025 / Revised: 20 May 2025 / Accepted: 27 May 2025 / Published: 1 June 2025
(This article belongs to the Special Issue Women in Engineering)

Abstract

This paper examines the impact of two storage policies—dedicated storage (D-SLAP) and randomized storage (R-SLAP)—on warehouse operational efficiency. It integrates the Storage Location Assignment Problem (SLAP) with the unrelated parallel machine scheduling problem (UPMSP), which represents the scheduling of the material handling equipment (MHE). This integration is intended to elucidate the interplay between storage strategies and scheduling performance. The considered evaluation metrics include transportation cost, average waiting time, and total tardiness, while accounting for product arrival and demand schedules, precedence constraints, and transportation expenses. Additionally, considerations such as MHE eligibility, resource requirements, and available storage locations are incorporated into the analysis. Given the complexity of the combined problem, a tailored Non-dominated Sorting Genetic Algorithm (NSGA-II) was developed to assess the performance of the two storage policies across various randomly generated test instances of differing sizes. Parameter tuning for the NSGA-II was conducted using the Taguchi method to identify optimal settings. Experimental and statistical analyses reveal that, for small-size instances, both policies exhibit comparable performance in terms of transportation cost and total tardiness, with R-SLAP demonstrating superior performance in reducing average waiting time. Conversely, results from large-size instances indicate that D-SLAP surpasses R-SLAP in optimizing waiting time and tardiness objectives, while R-SLAP achieves lower transportation cost.

1. Introduction

Warehouses play a pivotal role in supply chain management by ensuring efficient distribution and balancing supply–demand dynamics. Optimizing storage allocation reduces storage durations, minimizes travel distances, and alleviates product flow bottlenecks, thereby enhancing overall operational efficiency [1]. Furthermore, effective storage coordination streamlines warehouse operations, particularly in order-picking tasks. Integrating the processes of storing incoming items and scheduling the material handling equipment (MHE) that is responsible for transporting them to their designated locations is crucial for maximizing resource utilization. Suboptimal location assignments can lead to increased travel distances and congestion, ultimately degrading the MHE scheduling efficiency.
Efficient storage location assignment is critical for warehouse optimization. Recent advancements employ innovative computational techniques to enhance these assignments. Waubert de Puiseau et al. (2022) [2] applied deep reinforcement learning (DRL) to optimize storage locations, demonstrating significant reductions in transportation costs compared to manual methods. However, storage location planning should not be treated in isolation, as it directly impacts order picking efficiency and task scheduling. Bolaños-Zuñiga et al. (2023) [3] developed integrated models that simultaneously optimize storage allocation and picking routes, incorporating product weight to improve operational speed and accuracy. Cai et al. (2021) [4] demonstrated that combining storage planning with robotic path optimization in automated warehouses reduces both time and energy consumption.
A warehouse storage location assignment policy (SLAP) enhances warehouse efficiency by aligning storage capacity with demand, addressing the critical warehousing problem of optimal product allocation. Hausman et al. (1976) [5] pioneered SLAP for automated warehouses, with policies typically classified as dedicated (D-SLAP), class-based, or randomized (R-SLAP) [6]. This study focuses on both dedicated and randomized policies. Traditionally, warehouses relied on D-SLAP, where each product is assigned a fixed storage location. While straightforward, this approach has notable drawbacks, including excessive space requirements to accommodate peak inventory levels for all products [7]. Manzini et al. (2006) [8] showed that D-SLAP in large-scale warehouses leads to significant underutilization of storage space.
In contrast, R-SLAP has gained prominence with advancements in warehouse management systems (WMSs). Bartholdi and Hackman (2008) [9] showed that real-time WMS tracking enables efficient item retrieval, improving space utilization and operational flexibility. Quintanilla et al. (2015) [10] proposed a metaheuristic-based model for R-SLAP optimization, incorporating construction methods and local search algorithms to evaluate relocation strategies. As warehouse complexity increased, advanced optimization techniques emerged. Larco et al. (2017) [11] introduced a mixed-integer linear programming (MILP) framework to minimize order preparation time and worker discomfort, integrating production planning with R-SLAP. Similarly, Tang and Li (2009) [12] applied ant colony optimization (ACO) to optimize product placement, reducing retrieval times and enhancing space efficiency. Zhang et al. (2021) [13] further advanced this field by combining internet of things (IoT)-enabled tracking with randomized storage assignment, improving cost efficiency and space utilization.
The scheduling of MHE operations in a warehouse has the same structure as the parallel machine scheduling problem (PMSP), in which jobs are assigned to several machines; here, the machines are MHE units such as AGVs or forklifts. The models for the PMSP vary based on the characteristics of the machines involved. In the unrelated parallel machine scheduling problem (UPMSP), machines have different speeds and abilities to handle specific tasks [14]. This situation is common in real-world warehouses, where different types of MHE are used for various items. This requires a scheduling approach that considers each task's requirements, the machines' capabilities, and their overall performance.
Recent research integrates scheduling and storage allocation assignment to optimize both simultaneously [15]. This approach accounts for interdependencies between scheduling decisions and storage allocations, improving overall efficiency. Applications extend beyond warehousing; for instance, Tang et al. (2016) [16] developed an MILP model for bulk cargo ports, optimizing space allocation and ship scheduling. Fatemi-Anaraki et al. (2021) [17] proposed a mathematical model integrating berth allocation and vessel scheduling in constrained waterways. Chen et al. (2022) [18] introduced an MILP framework combining vehicle routing problems (VRPs) with zone picking, enhancing economic and service performance.
Warehouse operations often involve conflicting objectives, such as minimizing travel time while meeting delivery deadlines. To address this, researchers employed multi-objective optimization approaches. Zhang et al. (2023) [19] optimized storage layouts using a specialized algorithm, balancing picking speed and shelf stability for sustainable operations. Leon et al. (2023) [20] combined simulation and optimization to model real-world conditions, improving storage assignments while considering order picking and routing. Antunes et al. (2022) [21] compared different optimization algorithms and concluded that the choice of a solution approach depends on the problem characteristics; their findings motivated the choice of NSGA-II for this study.
NSGA-II has proven effective in complex multi-objective engineering optimization problems. Gao et al. (2024) [22] employed NSGA-II to optimize feeder bus route planning, addressing passenger flow, travel time, and cost within a three-dimensional space that included timetable coordination. Their use of NSGA-II demonstrated the algorithm’s capacity to achieve uniform Pareto fronts and adapt to real-time constraints in urban transit systems. Ma et al. (2023) [23] proposed an improved NSGA-II for maritime search and rescue operations under severe weather conditions, emphasizing the algorithm’s ability to balance exploration and exploitation through enhanced population diversity and multi-task knowledge transfer. Niu et al. (2023) [24] utilized NSGA-II for hydrogen production optimization, demonstrating its robustness under fluctuating power inputs. These recent studies underscore NSGA-II’s adaptability, supporting its adoption in this study for optimizing warehouse operations.
Despite extensive research on warehouse optimization, few studies address the joint optimization of storage location assignment (SLAP) and MHE scheduling decisions. This work bridges this critical gap by conducting a comparative analysis of two fundamental storage policies—Dedicated (D-SLAP) and Randomized (R-SLAP)—while simultaneously optimizing transportation costs, waiting times, and order lateness in warehouse operations.
This research is motivated by the growing operational complexity of modern warehouses, where co-optimization of storage allocation and MHE scheduling is essential for maximizing efficiency. By integrating SLAP with UPMSP, we present a multi-objective framework to evaluate how these policies differentially impact warehouse performance metrics, including cost efficiency and customer satisfaction.
Both SLAP and UPMSP are NP-hard combinatorial problems, rendering exact optimization methods computationally intractable. To address this, we leverage NSGA-II, a metaheuristic tailored for multi-objective optimization under conflicting criteria. NSGA-II is selected due to its:
  • Proven efficacy in Pareto front exploration, ensuring balanced trade-offs between objectives (e.g., minimizing transportation costs vs. tardiness).
  • Adaptability to problem-specific constraints, achieved through a customized three-string chromosome encoding that jointly optimizes scheduling and storage decisions.
  • Established robustness in logistics literature.
The proposed framework is validated via rigorous computational benchmarking and statistical testing, comparing D-SLAP and R-SLAP across scalable problem instances. Accordingly, the key contributions of this study are as follows:
  • Novel Integration of SLAP and UPMSP: A unified optimization model incorporating real-world constraints (precedence relationships, machine eligibility, limited resources, and dynamic product flows).
  • Algorithmic Innovation: A modified NSGA-II with a three-string chromosome representation, enabling simultaneous optimization of storage allocation and MHE scheduling.
  • Policy-Centric Benchmarking: A data-driven evaluation of D-SLAP and R-SLAP using multi-objective performance metrics (hypervolume, spacing) and parametric and nonparametric statistical tests, providing actionable insights for warehouse design.
The paper is structured as follows: Section 2 formulates the problem and details the proposed NSGA-II implementation, including performance metrics and parameter tuning. Section 3 presents experimental results across generated test instances. Section 4 provides statistical analysis of the findings. Section 5 concludes with key insights and future research directions.

2. Materials and Methods

This research presents a comparative study of the performance of D-SLAP and R-SLAP while utilizing an integrated strategy for scheduling MHE and storing goods in warehouses.

2.1. Problem Definition

The products are transported from the input/output (I/O) station to the assigned storage locations in the warehouse. Both the dedicated and the randomized storage policies are considered for the SLAP. As shown in Figure 1, the warehouse layout is designed with wide aisles to ensure safe and efficient operation. To maximize storage capacity, vertical space is utilized with two storage levels available at each location. The I/O station, located in the center of the warehouse, serves both incoming and outgoing items. Each item represents a category of products, referred to as jobs. The warehouse is arranged into numbered locations for storage, with each location capable of holding a single pallet at a time. The total number of these locations is known in advance.
Various MHE, such as forklifts, trucks, and automated guided vehicles (AGVs), transport goods. All MHE are available at the start of the scheduling process and can access any storage location. However, each unit can handle only one job at a time when moving a product to a storage location. No preemption or interruption is allowed during job processing. Products can only be handled when the needed and eligible machine and other required resources are available. Every product is moved and stored as pallets. These pallets vary in size, weight, and geometric configuration, hence the eligibility of specific machines to handle certain jobs. Due to the stacking of pallets above, beside, and in front of one another, precedence constraints exist between some jobs. In addition, a job can only be moved to a storage location if that location is unoccupied. This ensures that no overlap occurs with jobs stored previously in the same location. Transportation times between the I/O station and storage locations depend on the machine and the location. On the other hand, processing times, which represent loading and unloading, are job- and MHE-dependent.
The key objective is to determine the optimal timing for moving products using the appropriate MHE and assigning storage locations, while minimizing transportation costs, waiting times, and lateness. Under both D-SLAP and R-SLAP, three objectives are considered. The first objective ($OF_1$) corresponds to the cost of transporting products using the appropriate MHE to the designated storage locations. The second objective ($OF_2$) accounts for the waiting time of products at the I/O station before transferring them to their designated storage locations. The third objective ($OF_3$) is the total tardiness in meeting the required due dates.
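To make these objectives concrete, the following minimal sketch (Python, with hypothetical data-structure and field names not taken from the paper) evaluates $OF_1$–$OF_3$ for a given assignment of jobs to MHE, start times, and storage locations; it illustrates the definitions above and is not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class JobPlan:
    job: int          # job index
    machine: int      # assigned MHE
    location: int     # assigned storage location
    start: float      # start of handling at the I/O station
    release: float    # release (arrival) time R_j
    due: float        # due date D_j

def evaluate_objectives(plans, transport_cost, transport_time, proc_time):
    """Return (OF1, OF2, OF3) = (transportation cost, average waiting time, total tardiness).

    transport_cost[m][l] : cost of moving a pallet with MHE m to location l
    transport_time[m][l] : travel time of MHE m to location l
    proc_time[j][m]      : loading/unloading time of job j on MHE m
    """
    of1 = sum(transport_cost[p.machine][p.location] for p in plans)
    # Waiting time: how long a job sits at the I/O station after its release.
    of2 = sum(max(0.0, p.start - p.release) for p in plans) / len(plans)
    # Tardiness: completion time beyond the due date, summed over all jobs.
    of3 = 0.0
    for p in plans:
        completion = (p.start + proc_time[p.job][p.machine]
                      + transport_time[p.machine][p.location])
        of3 += max(0.0, completion - p.due)
    return of1, of2, of3
```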
This research proposes a novel integrated optimization framework for simultaneous MHE scheduling and storage location assignment. Given the NP-hard complexity inherent to these combinatorial problems [25], traditional exact solution methods become computationally intractable for practical-scale instances. To overcome this computational limitation, a customized NSGA-II implementation is proposed in this study, specifically designed for this integrated optimization challenge. The metaheuristic approach provides an efficient Pareto solution frontier while maintaining computational feasibility.

2.2. Non-Dominated Sorting Genetic Algorithm (NSGA-II)

NSGA-II is a widely adopted metaheuristic for solving multi-objective optimization problems [26,27]. Developed by Deb et al. (2002) [28], NSGA-II employs a genetic algorithm framework to identify Pareto-optimal solutions through mechanisms such as selection, crossover, mutation, and Pareto-based ranking.
The algorithm begins by randomly generating an initial population of feasible solutions. This population evolves over iterations by applying different genetic operations such as selecting parents, crossover, mutation, and ranking solutions into non-dominated fronts. In the proposed technique, each solution (chromosome) is encoded using three distinct strings: (1) a job sequence, (2) a machine assignment, and (3) a location assignment. The length of each string corresponds to the number of jobs (N) to be scheduled. To ensure compliance with precedence constraints, a corrective algorithm proposed by Afzalirad et al. (2016) [14] is used to adjust the job order and make it feasible. The machine assignment string is generated by randomly assigning an eligible machine to each job. The location assignment string varies depending on the storage policy, as described below and illustrated by the sketch that follows the list.
  • Under D-SLAP, each job is assigned a dedicated storage location, ensuring that items do not share storage spaces.
  • Under R-SLAP, jobs are allocated to any available storage location at random.
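The sketch below illustrates this three-string encoding under the two policies (Python). The names are illustrative assumptions, and the simple greedy precedence repair stands in for the corrective algorithm of Afzalirad et al. [14] rather than reproducing it; it assumes an acyclic precedence graph.

```python
import random

def repair_precedence(seq, preds):
    """Greedy repair: rebuild seq so that each job appears only after all of
    its predecessors (preds[j] = set of jobs that must precede j)."""
    fixed, remaining = [], list(seq)
    while remaining:                      # assumes the precedence graph is acyclic
        for j in remaining:
            if all(p in fixed for p in preds.get(j, set())):
                fixed.append(j)
                remaining.remove(j)
                break
    return fixed

def random_chromosome(n_jobs, preds, eligible, dedicated, free_locations, policy):
    """One chromosome = (job sequence, machine assignment, location assignment)."""
    jobs = list(range(n_jobs))
    random.shuffle(jobs)
    jobs = repair_precedence(jobs, preds)
    machines = [random.choice(eligible[j]) for j in jobs]
    if policy == "D-SLAP":
        locations = [dedicated[j] for j in jobs]            # slot reserved for each job (assumption)
    else:                                                    # R-SLAP
        locations = random.sample(free_locations, n_jobs)    # any free slot, without reuse
    return jobs, machines, locations
```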
An expanded double-point crossover operator is employed to explore neighborhood solutions. Using tournament selection, two parent chromosomes (Pr1 and Pr2) are selected. Two crossover points are randomly selected from a discrete uniform distribution, and gene segments between these points are exchanged to produce offspring (Ch1 and Ch2). The remaining positions in each child are filled with unassigned genes from the respective parent while preserving their original order.
To ensure feasibility, a corrective mechanism is applied to the job sequence strings. For the machine assignment strings, a random number (r ∈ [0,1]) is generated for each gene. If r > 0.5, the machine assignment is inherited from the opposite parent; otherwise, it is retained from the same parent. The location assignment follows a similar approach: if r > 0.5, the location is inherited from the parent; otherwise, a random feasible location is assigned. Post-crossover, a repair mechanism may be applied to maintain the feasibility of the generated solution.
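A compact sketch of the described crossover is given below (Python): the segment exchange with order-preserving fill applies to the job string, while the machine string uses the per-gene r > 0.5 rule; the location string follows the analogous rule, drawing a random feasible location when the gene is not inherited. Function names are illustrative.

```python
import random

def two_point_order_crossover(p1, p2):
    """Copy the segment between two random cut points from the mate (p2) and fill
    the remaining positions with the missing jobs in their order from p1."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p2[a:b + 1]
    segment = set(p2[a:b + 1])
    fill = iter(j for j in p1 if j not in segment)
    for i in range(n):
        if child[i] is None:
            child[i] = next(fill)
    return child            # a precedence repair may still be applied afterwards

def crossover_machines(m_own, m_other):
    """Machine string: with r > 0.5 inherit the gene from the opposite parent,
    otherwise retain the gene from the same parent."""
    return [b if random.random() > 0.5 else a for a, b in zip(m_own, m_other)]
```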
A mutation operator enhances population diversity. A chromosome is randomly selected, and two genes in its job sequence string are swapped. If the resulting sequence violates precedence constraints, the correction mechanism is applied. For the machine assignment string, unchanged genes retain their original assignments, while swapped genes undergo reassignment: a random number r ∈ [0,1] is generated, and if r < 0.5, the original machine is retained; otherwise, a new eligible machine is selected. The location assignment string is regenerated based on the storage policy: for D-SLAP, each job is assigned a random dedicated storage location, while for R-SLAP, each job is assigned a random available storage location.
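A minimal sketch of this swap mutation follows (Python). It reuses the illustrative repair_precedence helper from the encoding sketch above, and the 0.5 threshold mirrors the rule described here.

```python
import random

def swap_mutation(jobs, machines, eligible, preds):
    """Swap two positions in the job string, repair precedence if needed, then
    re-draw the machine gene of each swapped position with probability 0.5."""
    jobs, machines = jobs.copy(), machines.copy()
    i, k = random.sample(range(len(jobs)), 2)
    jobs[i], jobs[k] = jobs[k], jobs[i]
    jobs = repair_precedence(jobs, preds)     # illustrative helper from the encoding sketch
    for pos in (i, k):
        if random.random() >= 0.5:            # otherwise the original machine is retained
            machines[pos] = random.choice(eligible[jobs[pos]])
    return jobs, machines
```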
NSGA-II employs non-dominated sorting to classify solutions into Pareto fronts based on dominance relationships. Solutions within each front are further ranked using crowding distance, a measure of solution density in the objective space, to promote diversity. The algorithm terminates after a predefined number of iterations, yielding a set of non-dominated solutions.
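For reference, the two NSGA-II building blocks mentioned here can be sketched as follows (Python, with all three objectives minimized); this mirrors the standard definitions of Deb et al. [28] rather than code from the paper.

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def crowding_distance(front):
    """front: list of objective tuples in one non-dominated front.
    Returns one crowding-distance value per solution (boundary points get inf)."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: front[i][j])
        span = front[order[-1]][j] - front[order[0]][j] or 1.0
        dist[order[0]] = dist[order[-1]] = float("inf")
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (front[order[pos + 1]][j] - front[order[pos - 1]][j]) / span
    return dist
```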

2.3. Performance Metrics

Several metrics are used to evaluate the performance of multi-objective optimization algorithms [29]. In this study, the metrics used are the number of solutions in the Pareto front (NPS), the Mean Ideal Distance (MID), a diversification metric (DM), and a metric that evaluates the solutions' dispersion (SNS). In addition, a Quality Metric (QM) is used, which compares the dominance of solutions and calculates the percentage of non-dominated solutions that belong to each storage policy. Equations (1) to (3) define DM, MID, and SNS, respectively. In these equations, the index i refers to a Pareto front solution, while $OF_{j,i}$ is the jth objective function value of solution i.
$$DM = \sqrt{\sum_{j=1}^{3} \left( \max_{i} OF_{j,i} - \min_{i} OF_{j,i} \right)^{2}} \qquad (1)$$
$$MID = \frac{\sum_{i=1}^{NPS} c_{i}}{NPS} \qquad (2)$$
where $c_{i} = \sqrt{\sum_{j=1}^{3} OF_{j,i}^{2}}$.
$$SNS = \sqrt{\frac{\sum_{i=1}^{NPS} \left( MID - c_{i} \right)^{2}}{NPS - 1}} \qquad (3)$$
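The following short sketch (Python) computes NPS, DM, MID, and SNS as in Equations (1)–(3); QM is omitted because it requires combining the fronts obtained under both policies.

```python
import math

def pareto_metrics(front):
    """front: list of objective tuples (OF1, OF2, OF3) on the obtained Pareto front.
    Returns NPS, DM, MID and SNS following Equations (1)-(3)."""
    nps = len(front)
    n_obj = len(front[0])
    dm = math.sqrt(sum(
        (max(s[j] for s in front) - min(s[j] for s in front)) ** 2
        for j in range(n_obj)))
    c = [math.sqrt(sum(v ** 2 for v in s)) for s in front]   # distance to the ideal point
    mid = sum(c) / nps
    sns = math.sqrt(sum((mid - ci) ** 2 for ci in c) / (nps - 1)) if nps > 1 else 0.0
    return nps, dm, mid, sns
```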

2.4. Parameter Tuning

Metaheuristic algorithms require careful parameter selection to ensure optimal performance. In this study, the Taguchi method is employed to determine the best parameter combinations for the NSGA-II algorithm. This approach efficiently evaluates multiple decision variables with fewer experiments by leveraging orthogonal arrays (OAs) instead of full factorial designs [30].
For the proposed NSGA-II implementation, the following parameters are optimized: maximum number of iterations, population size, crossover rate, and mutation rate. An L9 OA is utilized to test each parameter at three distinct levels, as outlined in Table 1. These levels are selected to accommodate both small and large test instances under D-SLAP and R-SLAP scenarios.
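As an illustration, combining the standard L9(3^4) array with the levels of Table 1 yields the nine NSGA-II configurations to be tested; the sketch below (Python) enumerates them, with the factor-to-column mapping chosen arbitrarily for illustration.

```python
# Standard Taguchi L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Factor levels taken from Table 1.
LEVELS = {
    "max_iterations": (200, 350, 500),
    "population_size": (50, 75, 100),
    "crossover_rate": (0.6, 0.75, 0.9),
    "mutation_rate": (0.01, 0.05, 0.1),
}

def l9_configurations():
    """Translate each L9 row into a concrete NSGA-II parameter setting."""
    names = list(LEVELS)
    for run in L9:
        yield {name: LEVELS[name][lvl - 1] for name, lvl in zip(names, run)}
```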
The NSGA-II algorithm is executed across all parameter combinations of the L9 OA, with each configuration evaluated over 10 replications. To assess performance, two key metrics are employed: the Relative Percentage Deviation (RPD) and the signal-to-noise (S/N) ratio. The RPD, calculated using Equation (4), normalizes the acquired results: $Policy_{sol}$ is the value of the performance measure obtained for a test instance, and $Best_{sol}$ is the best value of that measure obtained over all replications of the instance. The RPD thus provides a measure of the relative performance of the configurations.
$$RPD = \frac{\left| Policy_{sol} - Best_{sol} \right|}{Best_{sol}} \times 100 \qquad (4)$$
The Taguchi method only deals with one response variable. Therefore, the weighted mean of the performance measures (WMPM) is identified as in Equation (5).
$$WMPM = \frac{NPS + DM + 2\,MID + SNS + 2\,QM}{7} \qquad (5)$$
In addition, the average value for 10 replications is calculated. The signal-to-noise (S/N) ratio is used to reduce the variation in the response variable. According to the Taguchi method, the S/N ratio for minimizing objectives is calculated using Equation (6).
$$S/N = -10 \times \log_{10} \left( \text{objective function} \right)^{2} \qquad (6)$$
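A minimal sketch of the tuning computations in Equations (4)–(6) is given below (Python); averaging the WMPM over the 10 replications and maximizing the S/N ratio then identifies the preferred level of each parameter.

```python
import math

def rpd(policy_sol, best_sol):
    """Relative Percentage Deviation, Equation (4)."""
    return abs(policy_sol - best_sol) / best_sol * 100

def wmpm(nps, dm, mid, sns, qm):
    """Weighted mean of the performance measures, Equation (5)."""
    return (nps + dm + 2 * mid + sns + 2 * qm) / 7

def sn_ratio(response):
    """Taguchi smaller-the-better signal-to-noise ratio, Equation (6)."""
    return -10 * math.log10(response ** 2)
```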
The best parameter combinations for small- and large-sized instances are summarized in Table 2. For each NSGA-II parameter, the best level is the one that yields the minimum WMPM and the maximum S/N ratio. Figure 2 and Figure 3 display, respectively, the average WMPM and S/N ratio obtained by the NSGA-II algorithm at the different levels of the studied parameters; the best combinations in Table 2 for the problem under D-SLAP and R-SLAP are identified from these figures.

3. Results

The computational experiments were conducted on a workstation equipped with an 11th-generation Intel® Core™ i9-11900K processor and 32 GB of RAM. The proposed NSGA-II was implemented in MATLAB R2020a and evaluated on 24 test instances, evenly split between 12 small-size and 12 large-size cases. Each instance was solved under both D-SLAP and R-SLAP policies to compare their performance.

3.1. Experimental Design

To compare the performance of the D-SLAP and R-SLAP policies under the proposed multi-objective NSGA-II metaheuristic for the integrated UPMSP and SLAP, the 24 test instances were designed by varying five key problem parameters: the number of items, machines, jobs, storage locations, and resource availability. Temporal parameters were generated using discrete uniform distributions. For small-size instances, processing times were sampled from U(1,10), while large-size instances used U(1,20). Setup times followed U(1,4) minutes for small instances and U(3,6) minutes for large instances. Release dates were assigned within U(1,30) for small instances and U(1,50) for large instances. The due date $D_j$ for each job j was calculated using Equation (7), where $R_j$ represents its release time, U(2,5) × 10 generates a random integer multiple of 10 between 20 and 50, and $\max_m(P_{jm})$ denotes the maximum processing time of job j across all MHE.
$$D_{j} = R_{j} + U(2,5) \times 10 + \max_{m} \left( P_{jm} \right) \qquad (7)$$
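The temporal parameters can be generated as in the sketch below (Python); the function and variable names are illustrative, and the sequence-dependent setup times are drawn per MHE, as in the demonstrative example of Section 3.4.

```python
import random

def sample_temporal_parameters(n_jobs, n_mhe, small=True):
    """Draw processing times, sequence-dependent setup times, release dates and
    due dates for one test instance, using the uniform ranges described above."""
    p_lo, p_hi = (1, 10) if small else (1, 20)
    s_lo, s_hi = (1, 4) if small else (3, 6)
    r_hi = 30 if small else 50
    proc = [[random.randint(p_lo, p_hi) for _ in range(n_mhe)] for _ in range(n_jobs)]
    setup = [[[random.randint(s_lo, s_hi) for _ in range(n_jobs)]
              for _ in range(n_jobs)] for _ in range(n_mhe)]   # setup[m][from][to]
    release = [random.randint(1, r_hi) for _ in range(n_jobs)]
    due = [release[j] + random.randint(2, 5) * 10 + max(proc[j])   # Equation (7)
           for j in range(n_jobs)]
    return proc, setup, release, due
```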
The problem constraints were implemented through two binary matrices. A precedence matrix encoded job sequencing dependencies, while an MHE eligibility matrix ensured that each job could be processed by at least half of the available MHE (rounded down), following the methodology established in [14].
Cost parameters were uniformly distributed, with transportation and processing costs $C_j$ ranging between 1 and 10 for all instances. All jobs associated with the same item were assigned identical transportation costs and processing times when executed by the same MHE. MHE speeds differed by instance size: small instances used speeds between 5 and 8 km/h (U(5,8)), while large instances employed faster MHE at 8–10 km/h (U(8,10)). Transportation times were computed based on rectilinear (Manhattan) distances between the I/O station at (0,0,0) and storage locations at (x,y,z), where z remained fixed per rack level. The distance calculation between coordinates $(x_1, y_1)$ and $(x_2, y_2)$ followed $|x_2 - x_1| + |y_2 - y_1|$, as established in [31] for grid-based warehouse environments. This approach provides an accurate representation of real-world travel paths in warehouse settings.
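The rectilinear travel-time computation reduces to the short sketch below (Python); the unit conversion assumes coordinates expressed in kilometres and times in minutes, which is an assumption of this illustration rather than a detail stated in the text.

```python
def rectilinear_distance(location, io_station=(0, 0)):
    """Manhattan distance |x2 - x1| + |y2 - y1| on the warehouse grid; the rack
    level (z) is fixed per location and handled separately."""
    return abs(location[0] - io_station[0]) + abs(location[1] - io_station[1])

def transport_time_minutes(location, speed_kmh):
    """Travel time of an MHE with the given speed to the storage location."""
    return rectilinear_distance(location) / speed_kmh * 60
```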

3.2. NSGA-II Performance Under D-SLAP and R-SLAP

Table 3 presents a comparative analysis of the algorithm's performance under D-SLAP and R-SLAP for small-size test instances, evaluated across the five key performance metrics. Both policies consistently achieved 100 solutions in the Pareto front, indicating comparable solution quality. However, D-SLAP exhibited a more uniform distribution of solutions along the Pareto front compared to R-SLAP, suggesting better diversity in its solution set. Conversely, R-SLAP generated solutions with smaller MID values, implying closer proximity to the ideal point and thus higher solution quality in that respect.
Table 4 presents a comparative evaluation of D-SLAP and R-SLAP for large-size instances. The results indicate that R-SLAP achieves superior performance in terms of NPS, MID, and QM, demonstrating its effectiveness in identifying efficient Pareto front solutions across all configurations. Furthermore, R-SLAP generates solutions that are consistently closer to the ideal point. In contrast, D-SLAP exhibits better performance in SNS and DM, indicating a more uniform distribution of solutions along the Pareto front.
Figure 4 illustrates a comparative analysis of the performance of NSGA-II under the D-SLAP and R-SLAP policies across the three objectives for the fifth replication of the second large-sized test instance. The solutions obtained by D-SLAP are widely dispersed across the solution space, reflecting greater diversity and a balanced trade-off among the objectives. Conversely, R-SLAP's solutions are concentrated within a narrower region, resulting in reduced solution variety compared to D-SLAP.

3.3. D-SLAP Versus R-SLAP Comparative Results

The comparison of D-SLAP versus R-SLAP is conducted across the three objective functions, with the results averaged over ten replications. For small test instances (Table 5), both policies perform comparably in the total tardiness, while R-SLAP outperforms D-SLAP in the average waiting time for most cases; the transportation cost shows marginal differences, with each policy excelling in select instances. This suggests R-SLAP’s superiority for the average waiting time and highlights the flexibility of randomized storage in small systems due to unconstrained slot allocation. For large instances (Table 6), R-SLAP dominates the transportation cost objective, achieving superior results in 11 out of 12 cases, whereas D-SLAP excels in average waiting time and total tardiness for most instances, despite R-SLAP’s competitiveness in sporadic cases. Collectively, these findings demonstrate that R-SLAP is preferable for the transportation cost regardless of system scale, while D-SLAP proves more effective for the other objectives in large-scale configurations.
To compare the performance of the two SLAP policies with respect to each individual objective, the Relative Percentage Deviation (RPD) is evaluated. The RPD is calculated for each instance based on the best solution obtained among the two storage policies, as provided in Equation (8).
$$RPD = \frac{\left| Best_{Front} - Best_{sol} \right|}{Best_{sol}} \times 100 \qquad (8)$$
To provide a visual comparison of the two policies using the RPD, interval plots at 95% confidence level are drawn for each objective individually. For small-size instances (Figure 5, Figure 6 and Figure 7), the interval plots reveal distinct performance patterns across the three objectives. For the transportation cost objective (Figure 5), D-SLAP demonstrates a mean RPD value of 8.5 compared to R-SLAP’s 10.0, with partial confidence interval (CI) overlap suggesting potential statistical equivalence, though D-SLAP’s marginally lower RPD may indicate slight superiority. For the average waiting time, Figure 6 shows a pronounced performance difference, with D-SLAP’s mean RPD of 40.0 substantially exceeding R-SLAP’s 5.0; the complete confidence interval separation confirms statistical significance, establishing R-SLAP’s clear advantage. For total tardiness (Figure 7), D-SLAP’s mean RPD (75.0) is less than half of R-SLAP’s (175.0), yet overlapping CIs suggest that this difference may not be statistically significant. The aggregate analysis indicates comparable performance between policies for the transportation cost and the total tardiness, but R-SLAP’s decisive superiority in the average waiting time.
The large instance analysis reveals statistically significant performance divergences across all objectives. Regarding the transportation cost (Figure 8), R-SLAP achieves a markedly lower mean RPD (5.5 versus D-SLAP's 20.0), with non-overlapping CIs confirming its statistical superiority. This trend reverses for the average waiting time (Figure 9), where D-SLAP's mean RPD (7.5) significantly outperforms R-SLAP's 12.5, as evidenced by distinct CIs. The pattern continues in the total tardiness (Figure 10), with D-SLAP maintaining superiority (mean RPD 12.5 vs. R-SLAP's 18.5) and non-overlapping CIs validating the statistical significance of this advantage. These results demonstrate a consistent performance inversion between instance sizes: while R-SLAP dominates the transportation cost regardless of scale, D-SLAP shows increasing superiority for the other two objectives as instance size grows.

3.4. Demonstrative Example Using Data of Large Instance 1

As a demonstrative example, this section provides the detailed values of the parameters for the first large-size test instance, as given in Table 7, Table 8, Table 9, Table 10, Table 11, Table 12 and Table 13. This problem consists of three items, 20 jobs, four MHE, 10 storage locations, and two available resources. The resultant best objective values for the 10 replications of running NSGA-II under both SLAP policies are provided in Table 14. As can be concluded, on average, D-SLAP outperforms R-SLAP in terms of this specific instance.

4. Discussion

The performance of D-SLAP and R-SLAP was evaluated across three key objectives, with central tendency assessed using both the mean and median. The mean, as the arithmetic average, is ideal for symmetrically distributed data without outliers but is sensitive to extreme values. In contrast, the median, representing the middle value of an ordered dataset, is robust against outliers and preferable for skewed distributions. Under normality, the mean and median converge, whereas in skewed or outlier-laden data, the median provides a more reliable measure of central tendency. The selection between these metrics depends on the underlying data distribution.
To compare policy performance, interval plots with 95% confidence intervals (CIs) were employed. However, since interval plots rely on aggregated data, they may obscure underlying variability. Therefore, formal hypothesis testing was conducted to rigorously assess the statistical significance of observed differences, accounting for paired data structures and ensuring population-level generalizability. Prior to test selection, normality testing (Shapiro–Wilk test) was performed, where a p-value < 0.05 indicated deviation from normality, necessitating non-parametric alternatives such as the Wilcoxon Signed Rank Test. Normally distributed data justified parametric tests, including the paired t-test.
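The test-selection logic described here can be sketched as follows (Python, using SciPy). The Shapiro–Wilk routine stands in for the normality check (the paper also reports RJ statistics), and the function name is illustrative.

```python
from scipy import stats

def compare_policies(arpd_dslap, arpd_rslap, alpha=0.05):
    """Paired comparison of per-instance ARPD values of the two policies: test
    the paired differences for normality, then apply a paired t-test or the
    Wilcoxon signed-rank test accordingly."""
    diffs = [d - r for d, r in zip(arpd_dslap, arpd_rslap)]
    _, p_normal = stats.shapiro(diffs)
    if p_normal >= alpha:                       # differences look normal
        statistic, p_value = stats.ttest_rel(arpd_dslap, arpd_rslap)
        test_used = "paired t-test"
    else:                                       # deviation from normality
        statistic, p_value = stats.wilcoxon(arpd_dslap, arpd_rslap)
        test_used = "Wilcoxon signed-rank test"
    return test_used, statistic, p_value
```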
The RPD was considered for both policies for each of the 24 small- and large-size test instances. The best objective value was calculated from each replication's Pareto front. The differences between the ARPD values of D-SLAP and R-SLAP were then calculated for each instance, resulting in 12 paired differences per objective for each problem size. The normality test results differ across the three objectives. For small-sized test instances, the transportation cost shows normality (p-value > 0.100, RJ = 0.956); therefore, a paired t-test is appropriate. The average waiting time has non-normal data (p-value < 0.010, RJ = 0.833), suggesting the use of a Wilcoxon test. The total tardiness also has non-normal data (p-value < 0.010, RJ = 0.807), indicating the need for a non-parametric test due to non-normality and high variability. For large-size test instances, all three objective functions show approximately normal differences (p-values > 0.100, RJ values of 0.971, 0.979, and 0.965, respectively), which supports the use of a paired t-test for all three objectives of the large-size instances.
For the transportation cost related to small-size test instances, the paired t-test shows that D-SLAP significantly outperforms R-SLAP, with a mean ARPD difference of −18.81 for H0: μdifference = 0 and H1: μdifference < 0. For the average waiting time, the Wilcoxon Signed Rank Test shows a significant difference, with a p-value of 0.021 and a median difference of 10.8521 for H0: μdifference = 0 and H1: μdifference > 0. This means that D-SLAP performs worse than R-SLAP for this objective. For the total tardiness, the Wilcoxon Signed Rank Test shows a p-value of 0.944, greater than 0.05, for H0: μdifference = 0 and H1: μdifference ≠ 0. This means that there is no significant difference between D-SLAP and R-SLAP, so they perform similarly for this objective.
Regarding the large-size test instances, the paired t-test is applied according to the normality test results of each objective. For the transportation cost, the paired t-test shows that R-SLAP significantly outperforms D-SLAP, with a mean ARPD difference of 14.17 for H0: μdifference = 0 and H1: μdifference > 0. This supports the results of the interval plot. For the average waiting time, the paired t-test shows a significant difference, with a p-value of 0.003 and a mean ARPD difference of −4.71 for H0: μdifference = 0 and H1: μdifference < 0. This means that D-SLAP performs better than R-SLAP for this objective. For the total tardiness, the paired t-test shows a p-value of 0.010, which is less than 0.05, for H0: μdifference = 0 and H1: μdifference < 0. This means that D-SLAP performs better than R-SLAP for this objective. It is worth noting that the choice of the alternative hypotheses is based on the results of the interval plots for the small- and large-sized test instances.

5. Conclusions

This study conducted a comparative analysis of two storage policies, D-SLAP and R-SLAP, to evaluate their impact on warehouse operations, integrating storage allocation with scheduling optimization. A customized NSGA-II metaheuristic was developed, incorporating three solution strings (job sequence, machine assignment, and location assignment) with policy-specific allocation logic. The model addressed multi-objective optimization, minimizing transportation costs, I/O station waiting times, and tardiness, while adhering to precedence, eligibility, and resource constraints. The Taguchi method was employed to optimize the metaheuristic parameters for robustness.
This study provides valuable insights for warehouse managers seeking to optimize their operations. Key findings revealed that while both policies generated a similar number of optimal solutions, D-SLAP produced more uniformly distributed Pareto-optimal solutions, advantageous for scenarios requiring diverse alternatives. In contrast, R-SLAP yielded solutions closer to the ideal point, excelling in specific objectives. For small instances, D-SLAP outperformed R-SLAP in minimizing transportation costs, both policies provided comparable performance in terms of total tardiness, and R-SLAP was superior in reducing waiting times. In large instances, R-SLAP dominated in transportation cost minimization, while D-SLAP excelled in waiting time and tardiness reduction. Statistical validation through parametric and non-parametric tests confirmed these performance differences, reinforcing the methodological rigor of the analysis.
From a practical standpoint, warehouse managers should select policies based on operational priorities: D-SLAP for balanced solution diversity and R-SLAP for targeted objective optimization. This research underscores the critical interplay between storage policies and scheduling, offering actionable insights for warehouse efficiency.
Future work should explore hybrid policies to leverage the strengths of both approaches. Additionally, incorporating dynamic supply–demand variability would enhance real-world applicability, while extending the model to include energy-efficient warehousing could address sustainability objectives.

Author Contributions

Conceptualization, T.F.A. and R.M.S.; methodology, R.M.S.; software, R.M.S.; validation, R.M.S. and T.F.A.; formal analysis, R.M.S.; investigation, R.M.S.; resources, R.M.S.; writing—original draft preparation, R.M.S.; writing—review and editing, T.F.A.; supervision, T.F.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data for test instances are available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ARPD: Average Relative Percentage Deviation
Avg: Average value
CIs: Confidence intervals
D-SLAP: Dedicated Storage Location Assignment Problem
DM: Diversification Metric
MID: Mean Ideal Distance
NPS: Number of solutions in the Pareto front
NSGA-II: Non-dominated Sorting Genetic Algorithm
PMSP: Parallel machine scheduling problem
QM: Quality Metric
R-SLAP: Randomized Storage Location Assignment Problem
RPD: Relative Percentage Deviation
S/N: Signal-to-noise ratio
SLAP: Storage Location Assignment Problem
SNS: Solutions' dispersion
UPMSP: Unrelated parallel machine scheduling
WMPM: Weighted mean of the performance measures

References

  1. Wisittipanich, W.; Kasemset, C. Metaheuristics for Warehouse Storage Location Assignment Problems. Chiang Mai Univ. J. Nat. Sci. 2015, 14, 361–377.
  2. Waubert de Puiseau, C.; Nanfack, D.T.; Tercan, H.; Löbbert-Plattfaut, J.; Meisen, T. Dynamic Storage Location Assignment in Warehouses Using Deep Reinforcement Learning. Technologies 2022, 10, 129.
  3. Bolaños-Zuñiga, J.; Salazar-Aguilar, M.A.; Saucedo-Martínez, J.A. Solving Location Assignment and Order Picker-Routing Problems in Warehouse Management. Axioms 2023, 12, 711.
  4. Cai, J.; Li, X.; Liang, Y.; Ouyang, S. Collaborative Optimization of Storage Location Assignment and Path Planning in Robotic Mobile Fulfillment Systems. Sustainability 2021, 13, 5644.
  5. Hausman, W.H.; Schwarz, L.B.; Graves, S.C. Optimal Storage Assignment in Automatic Warehousing Systems. Manag. Sci. 1976, 22, 629–638.
  6. Gu, J.; Goetschalckx, M.; McGinnis, L.F. Research on Warehouse Design and Performance Evaluation: A Comprehensive Review. Eur. J. Oper. Res. 2010, 203, 539–549.
  7. Goetschalckx, M.; Ratliff, D.H. Shared Storage Policies Based on the Duration Stay of Unit Loads. Manag. Sci. 1990, 36, 1120–1132.
  8. Manzini, R.; Gamberi, M.; Regattieri, A. Design and Control of an AS/RS. Int. J. Adv. Manuf. Technol. 2006, 28, 766–774.
  9. Bartholdi, J.J.; Hackman, S.T. Allocating Space in a Forward Pick Area of a Distribution Center for Small Parts. IIE Trans. 2008, 40, 1046–1053.
  10. Quintanilla, S.; Pérez, Á.; Ballestín, F.; Lino, P. Heuristic Algorithms for a Storage Location Assignment Problem in a Chaotic Warehouse. Eng. Optim. 2015, 47, 1405–1422.
  11. Larco, J.A.; de Koster, R.; Roodbergen, K.J.; Dul, J. Managing Warehouse Efficiency and Worker Discomfort through Enhanced Storage Assignment Decisions. Int. J. Prod. Res. 2017, 55, 6407–6422.
  12. Tang, H.-Y.; Li, M.-J. An Improved Ant Colony Algorithm for Order Picking Optimization Problem in Automated Warehouse. In Fuzzy Information and Engineering Volume 2—Advances in Intelligent and Soft Computing; Cao, B., Li, T.F., Zhang, C.Y., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 62.
  13. Zhang, G.; Shang, X.; Alawneh, F.; Yang, Y.; Nishi, T. Integrated Production Planning and Warehouse Storage Assignment Problem: An IoT Assisted Case. Int. J. Prod. Econ. 2021, 234, 108058.
  14. Afzalirad, M.; Rezaeian, J. Resource-Constrained Unrelated Parallel Machine Scheduling Problem with Sequence Dependent Setup Times, Precedence Constraints and Machine Eligibility Restrictions. Comput. Ind. Eng. 2016, 98, 40–52.
  15. Heshmati, S.; Toffolo, T.A.M.; Vancroonenburg, W.; Vanden Berghe, G. Crane-Operated Warehouses: Integrating Location Assignment and Crane Scheduling. Comput. Ind. Eng. 2019, 129, 274–295.
  16. Tang, L.; Sun, D.; Liu, J. Integrated Storage Space Allocation and Ship Scheduling Problem in Bulk Cargo Terminals. IIE Trans. 2016, 48, 428–439.
  17. Fatemi-Anaraki, S.; Tavakkoli-Moghaddam, R.; Abdolhamidi, D.; Vahedi-Nouri, B. Simultaneous Waterway Scheduling, Berth Allocation, and Quay Crane Assignment: A Novel Matheuristic Approach. Int. J. Prod. Res. 2021, 59, 7576–7593.
  18. Chen, W.; Zhang, Y.; Zhou, Y. Integrated Scheduling of Zone Picking and Vehicle Routing Problem with Time Windows in the Front Warehouse Mode. Comput. Ind. Eng. 2022, 163, 107823.
  19. Zhang, X.; Mo, T.; Zhang, Y. Optimization of Storage Location Assignment for Non-Traditional Layout Warehouses Based on the Firework Algorithm. Sustainability 2023, 15, 10242.
  20. Leon, J.F.; Li, Y.; Peyman, M.; Calvet, L.; Juan, A.A. A Discrete-Event Simheuristic for Solving a Realistic Storage Location Assignment Problem. Mathematics 2023, 11, 1577.
  21. Antunes, A.R.; Matos, M.A.; Rocha, A.M.A.C.; Costa, L.A.; Varela, L.R. A Statistical Comparison of Metaheuristics for Unrelated Parallel Machine Scheduling Problems with Setup Times. Mathematics 2022, 10, 2431.
  22. Gao, X.; Liu, S.; Jiang, S.; Yu, D.; Peng, Y.; Ma, X.; Lin, W. Optimizing the Three-Dimensional Multi-Objective of Feeder Bus Routes Considering the Timetable. Mathematics 2024, 12, 930.
  23. Ma, Y.; Li, B.; Huang, W.; Fan, Q. An Improved NSGA-II Based on Multi-Task Optimization for Multi-UAV Maritime Search and Rescue under Severe Weather. J. Mar. Sci. Eng. 2023, 11, 781.
  24. Niu, M.; Li, X.; Sun, C.; Xiu, X.; Wang, Y.; Hu, M.; Dong, H. Operation Optimization of Wind/Battery Storage/Alkaline Electrolyzer System Considering Dynamic Hydrogen Production Efficiency. Energies 2023, 16, 6132.
  25. Kalra, M.; Singh, S. A Review of Metaheuristic Scheduling Techniques in Cloud Computing. Egypt. Inform. J. 2015, 16, 275–295.
  26. Wang, W.; Liu, Y.; Fan, X.; Zhang, Z. Optimization of Charging Station Capacity Based on Energy Storage Scheduling and Bi-Level Planning Model. World Electr. Veh. J. 2024, 15, 327.
  27. Morovati, R.; Kisi, O. Utilizing Hybrid Machine Learning Techniques and Gridded Precipitation Data for Advanced Discharge Simulation in Under-Monitored River Basins. Hydrology 2024, 11, 48.
  28. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
  29. Afzalirad, M.; Rezaeian, J. A Realistic Variant of Bi-Objective Unrelated Parallel Machine Scheduling Problem: NSGA-II and MOACO Approaches. Appl. Soft Comput. J. 2017, 50, 109–123.
  30. Hidri, L.; Alqahtani, A.; Gazdar, A.; Ben Youssef, B. Green Scheduling of Identical Parallel Machines with Release Date, Delivery Time and No-Idle Machine Constraints. Sustainability 2021, 13, 9277.
  31. Larson, R.C.; Sadiq, G. Facility Locations with the Manhattan Metric in the Presence of Barriers to Travel. Oper. Res. 1983, 31, 652–669.
Figure 1. Warehouse layout.
Figure 2. Plot of mean effects of WMPM for NSGA-II.
Figure 3. Plot of mean effects of S/N ratio for NSGA-II.
Figure 4. Plot of obtained Pareto fronts by D-SLAP and R-SLAP for large test instance 2, replication 5.
Figure 5. RPD mean plot and 95% confidence intervals for D-SLAP and R-SLAP for transportation cost using small-size test instances.
Figure 6. RPD mean plot and 95% confidence intervals for D-SLAP and R-SLAP for average waiting time using small-size test instances.
Figure 7. RPD mean plot and 95% confidence intervals for D-SLAP and R-SLAP for total tardiness using small-size test instances.
Figure 8. RPD mean plot and 95% confidence intervals for D-SLAP and R-SLAP for transportation cost using large-size test instances.
Figure 9. RPD mean plot and 95% confidence intervals for D-SLAP and R-SLAP for average waiting time using large-size test instances.
Figure 10. RPD mean plot and 95% confidence intervals for D-SLAP and R-SLAP for total tardiness using large-size test instances.
Table 1. Parameter tuning of NSGA-II.
Parameter | Level 1 | Level 2 | Level 3
Maximum iterations | 200 | 350 | 500
Population size | 50 | 75 | 100
Crossover rate | 0.6 | 0.75 | 0.9
Mutation rate | 0.01 | 0.05 | 0.1
Table 2. Best parameter settings of NSGA-II.
Parameter | Best Level | Level Value
Maximum iterations | 2 | 350
Population size | 3 | 200
Crossover rate | 2 | 0.75
Mutation rate | 3 | 0.1
Table 3. Comparison of performance metrics of NSGA-II with respect to storage policies for small-sized test instances.
No.INMLRNPSDMMIDSNSQM
D-SLAPR-SLAPD-SLAPR-SLAPD-SLAPR-SLAPD-SLAPR-SLAPD-SLAPR-SLAP
12422110010014.4214.42174.44174.851.59781.395050
22422210010000163.39149.34005050
33434110010010.3834.6951.3849.862.773.855050
4343421001009.2842.355.3554.131.016.45050
52622110010000262.61262.61005050
62622210010033.3535.06178.98181.725.273.675050
73634110010021.5554.40158.27112.123.234.345050
836342100100156.7039.55162.35134.848.955.575050
928241100100197.3669.79369.75352.5958.913.015050
1028242100100101.1487.44313.43307.1814.7113.155050
1138361100100200.09160.69473.69470.7427.3420.335050
123836210010029.2935.536384.30382.683.082.625050
Average value 10010064.4647.82229219.3913.916.1945050
A bold value indicates the optimal storage policy for each metric.
Table 4. Comparison of performance metrics of NSGA-II with respect to storage policies for large-sized test instances.
No.INMLRNPSDMMIDSNSQM
D-SLAPR-SLAPD-SLAPR-SLAPD-SLAPR-SLAPD-SLAPR-SLAPD-SLAPR-SLAP
1320410285.11001258.8699.81890.21956330.914818.8181.18
23304103761002095.9727.53230.83189.2509.4779.7890.22
3340620476.21002712.5729.75284.25012.3611.2178.34.1395.88
4350620572.11006543.81423.911,118.89680.615,395246.64.4295.58
5420410260.7100691.5235.22085.62093.3124.330.82.7997.20
6430410385.11001286.6729.13362.33303294.393.813.8386.17
7440620473.31003146.9949.66638.096827.5639.3192.79.4490.56
8450620578.21004847.9811.69704.129105.1983.8231.93.8296.19
95204102951001080.2505.41657.651721.3231.267.729.1170.9
10530410394.899.82142.51043.53277.473383.9394.7159.923.7876.23
11540620464.7982700.71327.56497.66620.5591.9291.49.8890.12
12550620580.81005690.61088.197599840.41118.71729.9590.05
Average value 78.599.82849.8855.95375.55227.81768.7157.5111.6588.36
A bold value indicates the optimal storage policy for each metric.
Table 5. Comparison of objective functions’ average values of storage policies for small-sized test instances.
No.INMLRTransportation CostAverage Waiting TimeTotal Tardiness
D-SLAPR-SLAPD-SLAPR-SLAPD-SLAPR-SLAP
12422115015071714646
22422292921281144329
3343413843.831.115.500
434342444433.114.500
526221114114212212105105
6262224040176156.12847.2
73634194.786.1110.554.642.48.6
83634259.856.596.9105.34.113.3
9282416680.6320307.1129110.4
1028242152152254.222382.665.4
1138361336336234.2240.856.562.4
1238362300.3300.3224.422669.171.6
A bold value indicates the optimal storage policy for each objective.
Table 6. Comparison of objective functions’ average values of storage policies for large-sized test instances.
No.INMLRTransportation CostAverage Waiting TimeTotal Tardiness
D-SLAPR-SLAPD-SLAPR-SLAPD-SLAPR-SLAP
13204102302.2294.81331.11392.3871.6930.7
23304103630.65812258.22477.21411.41582
3340620411881056.93687.33690.92595.62551.3
435062051257.3983.17251.37100.26122.85881.7
54204102404.8396.21648.41675.211071131.8
64304103696.96762449.82493.116201646.2
7440620410338514792.55039.837233935.9
845062051096.9900.56793.36883.55415.65466.9
95204102554.9515.31118.11257.9698.2848.2
105304103710.5643.12447.72654.51663.51834.3
1154062041395.41117.44713.34808.33595.33653.3
1255062052255.91921.96755.17247.85238.55719
A bold value indicates the optimal storage policy for each objective.
Table 7. Input parameters of demonstrative example.
JobItemRTDTRRCostDirect SuccessorsProcessing Time
M1M2M3M4
12333242, 3, 4, 5, 8, 9, 11, 14, 15, 17, 2017108
223393245, 6, 8, 9, 13, 15, 17, 1817108
31333244, 6, 7, 8, 10, 12, 16, 17, 18, 19, 206614
424898247, 13,14, 18, 1917108
533484227, 8, 10, 11, 14, 16, 17, 18, 19, 202577
632161227, 9, 13, 15, 16, 192577
7143832410, 13, 15, 17, 196614
8227772410, 13, 14, 15, 18, 19, 2017108
928482412, 14, 15, 16, 1717108
10124742411, 12, 13, 16, 17, 186614
11226662412, 13, 15, 17, 1817108
12317572214, 16, 17, 19, 202577
1331312216, 17, 18, 202577
14347872216, 18, 19, 202577
15315122 2577
16221612417, 2017108
172471072419, 2017108
181319124 6614
191245424206614
2015011024 6614
RT: ready time; DT: due date; RR: required resources; M: machine.
Table 8. Travel times of demonstrative example.
MHE/Location | L1 | L2 | L3 | L4 | L5 | L6 | L7 | L8 | L9 | L10
M1 | 4 | 4 | 5 | 5 | 6 | 4 | 4 | 5 | 5 | 6
M2 | 5 | 5 | 6 | 6 | 8 | 5 | 5 | 6 | 6 | 8
M3 | 4 | 4 | 5 | 5 | 6 | 4 | 4 | 5 | 5 | 6
M4 | 4 | 4 | 6 | 6 | 7 | 4 | 4 | 6 | 6 | 7
Table 9. Eligibility constraints of demonstrative example.
MHE/Job | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20
M1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1
M2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
M3 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0
M4 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1
Table 10. The setup times of the demonstrative example on MHE 1.
From/to1234567891011121314151617181920
044535445363364654454
104443533634355664344
250453463335556336364
344054333445435534445
464504365454633356555
544350653556356363553
655664056643653365353
763653406364356335654
844645350563366335564
966666365055454533453
1044464445606666445334
1145445635550356653464
1266656353334033556454
1345446665633405656633
1435554555563660556353
1533464564655633056336
1633563635465364305334
1763563665356545340663
1853465633456446445044
1935563344336656664504
2044565356366644433430
Table 11. The setup times of the demonstrative example on MHE 2.
From/to1234567891011121314151617181920
033344444334545456665
105344365355464666444
250546533564343353665
366053564445433336356
453403446445644346663
553450435633433635664
643634053435634355364
766343405565434635466
843554360636564643345
946656566055354354536
1033346533505556366634
1153454665630644363545
1244564466654045635345
1354563655664305333546
1433353553535630366356
1544663443435665036355
1636453644364534405356
1756354563434464630636
1834346444643343356044
1946334345555556356603
2064556456636563353450
Table 12. The setup times of the demonstrative example on MHE 3.
From/to1234567891011121314151617181920
064654656335663555344
104635655336355656563
240453354465645563553
354064635533645633543
434505353643634456646
533540346433466665446
653455043633333553356
745555504653565465446
844565350553643434344
963343555035334544435
1066665564303345564465
1153644656350346343455
1236643463345045353365
1355436545636405633654
1464345545533360355664
1536654664463445054463
1635535464454343505654
1765656336565346640663
1863443654655546335036
1964564433443455534303
2043535653653336664350
Table 13. The setup times of the demonstrative example on MHE 4.
From/to1234567891011121314151617181920
035336453546344566664
104663444663446566536
260346466344444444645
333033634653556555636
446304545333454453354
565650663535346564363
634665046433465543355
743436603534564543556
866444530563434663446
964456436054563354463
1044544436605554636433
1165335633540464456536
1256355436354033446443
1366334464643403463545
1435655463443450634346
1566364556563455056643
1635633336634335406533
1763536343464635330445
1836564554554345646065
1943663454363533553506
2043335655433444654560
Table 14. Obtained averages of objective functions of non-dominated solutions for ten replications of demonstrative example.
ReplicationD-SLAPR-SLAP
Transportation CostAvg. Waiting TimeTotal TardinessTransportation CostAvg. Waiting TimeTotal Tardiness
1295.91472.611016.72277.341625.51154.66
2294.481563.341099.1285.721397.04947.05
3279.021430.48978.25287.061396.31916.58
4248.441317.01910.78279.421544.171072.84
5237.741171.86798.63275.021780.611302.24
6204.021027.96699.34278.41664.81194.17
7277.11434.11995.88283.821558.491086.01
8287.91724.391251.2278.621661.421195.21
9258.481383.21973.26280.261603.441123.04
10172.72860.98587.27277.521607.971124.68
Average255.581338.595931.043280.3181583.9751111.648
A bold value indicates the optimal storage policy for each objective.