Artificial Bee Colony Algorithm with Nelder–Mead Method to Solve Nurse Scheduling Problem

Abstract: The nurse scheduling problem (NSP) is an NP-hard combinatorial optimization problem that allocates a set of shifts to a group of nurses over a scheduling period, subject to constraints. The objective of the NSP is to create a schedule that satisfies both the hard and soft constraints specified by healthcare management. This work explores a meta-heuristic approach, an artificial bee colony algorithm combined with the Nelder–Mead method (NM-ABC), to perform efficient nurse scheduling. The Nelder–Mead (NM) method is used as a local search in the onlooker bee phase of ABC to enhance the intensification process of ABC. Thus, the authors propose an improved solution strategy at the onlooker bee phase that exploits the benefits of the NM method. The proposed NM-ABC algorithm is evaluated on the standard dataset NSPLib, with experiments performed on NSP instances of various sizes. The performance of NM-ABC is measured using eight performance metrics: average best time, standard deviation, least error rate, success percentage, cost reduction, gap, and two feasibility counts. The results of our experiments reveal that the proposed NM-ABC algorithm attains highly significant improvements over other existing algorithms: it reduces the cost by 0.66%, and its gap percentage towards the optimum value is 94.30%. Instances have been successfully solved to best values matching the known optima recorded in NSPLib.


Introduction
In a hospital, various operations are performed; nurse rostering is a resource allocation problem. Every day, the work is divided into three periods: day shift, noon shift, and night shift. The process consists of periodically allocating workload to the nurses by considering hospital terms, namely constraints and requirements, for a scheduling period of one month. Constraints are classified as hard and soft: hard constraints must be satisfied when allocating the roster, while soft constraints should be satisfied as far as possible. The main contributions of this work are as follows:
1. A hybrid meta-heuristic algorithm, namely artificial bee colony optimization with the Nelder-Mead method, is proposed.
2. The search capability of ABC is enriched with the aid of the Nelder-Mead method, which consists of search strategies such as the midpoint, reflection, expansion, contraction, and shrinkage processes. These search strategies enhance the balance between exploration and exploitation.
3. NM-ABC is implemented and tested on the nurse scheduling problem (NSPLib).
4. The performance of NM-ABC is compared with that of several classical optimization algorithms.
The rest of the paper is structured as follows: Section 2 presents the mathematical formulation of the NSP. In Section 3, the proposed artificial bee colony algorithm with NM is discussed. Section 4 shows the applicability of NM-ABC to solving the NSP with experimental analysis. Section 5 presents a detailed analysis of the performance of NM-ABC in solving the NSP. Finally, Section 6 concludes the research work.

Nurse Scheduling Problem
The NSP can be described as allocating a set of nurses to a group of shifts for a given period. The constraints are determined by hospital regulations, nurse preferences, nurse requests, and working practices. Generally, the NSP involves two different types of constraints: hard and soft. Hard constraints are the regulations that must be satisfied to achieve a feasible solution; the roster pattern generated should satisfy all hard constraints, and the most common hard constraint is allocating shifts to a restricted number of nurses. Soft constraints are desirable but not obligatory and should be satisfied as much as possible; they determine the quality of the roster formed. The violation of any soft constraint incurs a penalty on the solution. The most common soft constraints in the NSP are balancing the workload among all nurses and using nurse resources efficiently [38,39]. The visual representation of the NSP is illustrated in Figure 1.
The NSP consists of a set of nurses N = {1, 2, . . . , i} that is allocated a set of shifts S = {1, 2, . . . , j} over the scheduling period D = {1, 2, . . . , k} in days. A shift pattern r is a 0/1 matrix describing the allocation of shifts to a particular nurse over the period. The main objective of the NSP is to identify a feasible solution such that the total cost is minimized. The solution representation of the NSP is given as X j,d,r = 1 if pattern r covers shift j on day d, and X j,d,r = 0 otherwise.
The nurse preference is the wish expressed by a particular nurse to work a distinct shift on a specific day; a penalty cost is added to the solution if the nurse's preference is not met. The preference cost C i,j,d is the wish of nurse i to work shift j on day d, and the total preference cost of a shift pattern r over the scheduled period is obtained by summing C i,j,d over the shifts the pattern covers. The objective of the NSP is to minimize the total preference cost over all nurses, where δ i is the set of possible shift patterns for nurse i over the scheduled period, subject to ∑ δ i r=1 X i,r = 1, which specifies that exactly one shift pattern is allocated to every nurse in the hospital. The smallest number M j,d of nurses essential for shift j on the scheduled day d is restricted by using Equation (5), and the decision variables X i,r ∈ {0,1}.
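To make the formulation concrete, the following sketch evaluates a candidate roster against the hard coverage constraints and the soft preference cost. The array shapes, the toy instance, and the helper names (`schedule_cost`, `is_feasible`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def schedule_cost(x, pref_cost):
    """Total preference (penalty) cost of a 0/1 schedule x[i, j, k]:
    nurse i works shift j on day k; pref_cost has the same shape."""
    return float(np.sum(x * pref_cost))

def is_feasible(x, min_cover):
    """Hard constraints: each nurse works exactly one shift per day, and
    shift j on day k is covered by at least min_cover[j, k] nurses."""
    one_shift_per_day = np.all(x.sum(axis=1) == 1)
    coverage = np.all(x.sum(axis=0) >= min_cover)
    return bool(one_shift_per_day and coverage)

# Toy instance: 2 nurses, 2 shifts, 1 day.
x = np.zeros((2, 2, 1), dtype=int)
x[0, 0, 0] = 1   # nurse 0 takes shift 0
x[1, 1, 0] = 1   # nurse 1 takes shift 1
pref = np.ones((2, 2, 1))
min_cover = np.ones((2, 1), dtype=int)  # each shift needs one nurse per day
```

A real NSPLib instance would supply the preference costs and coverage requirements; here they are filled with ones purely for illustration.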
The Equations (4)-(6) are the coverage constraints considered in solving the NSP. All the hard constraints in the NSP are mandatory and must be respected when designing the schedule pattern. The soft constraints are included in the objective function to increase the quality of the solution: they are not mandatory, but violating them incurs a penalty cost, and the hospital management suggests the penalty cost. The soft constraints (SC) considered in this work are as follows:
SC1: The nurse has restricted work on a specific day.
SC3: The nurse can request not to work on a specific shift.
SC4: The nurse can request to work on a specific shift.
SC5: No single work shift between two days off.
SC6: The nurses are not allowed to work more than three consecutive days.

Artificial Bee Colony Algorithm
Karaboga and Bahriye developed the artificial bee colony algorithm (ABC), inspired by the natural behavior of honey bees. The intelligent behavior of honey bees helps to find near-optimal solutions for optimization problems [32]. ABC is a population-based algorithm and consists of three groups of honey bees: employed bees, onlooker bees, and scout bees. The colony contains an equal number of employed bees and onlooker bees, and each solution in the population is held by one employed bee. The employed bees search for food sources and share the direction of each food source with the onlooker bees through the waggle dance. Based on a probability calculation, the higher-quality food sources are selected in the onlooker bee phase, and the bees continue searching around them. Low-quality food sources are rejected, and their employed bees are converted to scout bees. A scout bee then searches for a new food source or food position.

Initialization
The initial population consists of FS food sources, each an n-dimensional vector. A solution in the population is represented as X i = {x i,1 , x i,2 , . . . , x i,n }, and each food source is generated using Equation (13).
The food sources are randomly assigned to the employed bees, and the objective value of each solution is then evaluated.
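A minimal sketch of the initialization step, assuming the usual ABC form of Equation (13), x i,j = lb j + rand(0, 1) · (ub j − lb j); the function name and the bound vectors are our own:

```python
import random

def init_population(fs, n, lb, ub, seed=42):
    """Generate FS food sources, each an n-dimensional vector, using
    x_{i,j} = lb_j + rand(0,1) * (ub_j - lb_j)  (Equation (13))."""
    rng = random.Random(seed)
    return [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(n)]
            for _ in range(fs)]

population = init_population(fs=10, n=4, lb=[0.0] * 4, ub=[1.0] * 4)
```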

Employed Bee Phase
In the employed bee phase, a candidate solution is generated, and the positions of the food sources are monitored. The candidate solution is developed using Equation (14), v i,j = x i,j + φ i,j (x i,j − x k,j ), where j = 1, 2, . . . , S, the index k ∈ {1, 2, . . . , FS} is chosen different from i, and the value of φ ranges over (−1, 1). A greedy selection is made between v i and x i based on the fitness calculation: if the fitness value of v i is greater than that of x i , the solution x i is replaced by v i .
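The employed bee move and the greedy selection can be sketched as follows; perturbing a single randomly chosen dimension is the common ABC convention and an assumption here, as are the function names:

```python
import random

def employed_bee_step(pop, fitness, i, rng):
    """One employed-bee move: perturb a single randomly chosen dimension j
    of x_i via v_{i,j} = x_{i,j} + phi * (x_{i,j} - x_{k,j}), phi in (-1, 1),
    then keep the fitter of v_i and x_i (greedy selection)."""
    n = len(pop[i])
    j = rng.randrange(n)
    k = rng.choice([idx for idx in range(len(pop)) if idx != i])
    phi = rng.uniform(-1.0, 1.0)
    v = list(pop[i])
    v[j] = pop[i][j] + phi * (pop[i][j] - pop[k][j])
    if fitness(v) > fitness(pop[i]):   # higher fitness is better
        pop[i] = v

# Example: maximize fitness 1 / (1 + sum(x^2)) (i.e., minimize the sphere).
rng = random.Random(1)
fit = lambda x: 1.0 / (1.0 + sum(v * v for v in x))
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(6)]
initial_best = max(fit(x) for x in pop)
for _ in range(200):
    for i in range(len(pop)):
        employed_bee_step(pop, fit, i, rng)
```

Because the selection is greedy, each bee's fitness never decreases across iterations.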

Probability Calculation
After the fitness calculation, the employed bees share the direction of the food sources with the onlooker bees. The onlooker bees evaluate the fitness of the employed bees' solutions using the probability value p i . The solution's probability and fitness calculations are shown in detail in Algorithm 1.
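A sketch of the fitness-proportional selection probabilities; the paper's Algorithm 1 may use a slightly different normalization (e.g., Karaboga's p i = 0.9 · fit i / max(fit) + 0.1), so the simple ratio below is an assumption:

```python
def selection_probabilities(fitnesses):
    """Fitness-proportional selection used by the onlooker bees:
    p_i = fit_i / sum(fit)."""
    total = sum(fitnesses)
    return [f / total for f in fitnesses]

probs = selection_probabilities([0.2, 0.3, 0.5])
```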

Onlooker Bee Phase
Based on the probability value p i , each onlooker bee chooses a food source x i and performs a modification on it using Equation (14). To select the better of x i and v i , the same greedy method as in the employed bee phase is used.

Scout Bee Phase
After the employed and onlooker bee searches, a food source is abandoned if its solution cannot be improved within a predefined number of iterations. The corresponding employed bee becomes a scout bee and explores new food sources using Equation (13).
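The abandonment rule is usually implemented with a per-source trial counter; a minimal sketch, where the `limit` parameter and helper names are our assumptions:

```python
import random

def scout_phase(pop, trials, limit, reinit):
    """Scout-bee step: any food source whose trial counter reached `limit`
    without improvement is abandoned and replaced by a fresh random
    solution generated as in Equation (13)."""
    for i in range(len(pop)):
        if trials[i] >= limit:
            pop[i] = reinit()
            trials[i] = 0

rng = random.Random(7)
pop = [[1.0, 1.0], [2.0, 2.0]]
trials = [5, 0]       # first source exhausted, second still improving
scout_phase(pop, trials, limit=5,
            reinit=lambda: [rng.uniform(0, 1) for _ in range(2)])
```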

Nelder-Mead Method
The NM method is used to find a local minimum of a given function and is represented, in two dimensions, by a triangle with three vertices. The worst vertex of the triangle is identified, rejected for the next iteration, and replaced with a new vertex. The search continues through a sequence of triangles of decreasing size, moving toward the best solution; finally, the vertex with the minimum value is chosen as the best solution.
Let f (x, y) be the function to be minimized, and start with three vertices of a triangle. The fitness function is evaluated at all three points, and the vertices are ordered by fitness value as best (I), good (J), and worst (K). The NM method works with four operations: reflection, expansion, contraction, and shrinkage.

Midpoint (M)
The midpoint M of the line segment joining the two best vertices (the best and good vertices) is calculated using M = (I + J)/2.

Reflection (R)
The function decreases as we move from the worst vertex towards the best vertex, and likewise from the worst towards the good vertex. The test point, the reflection point R, is therefore chosen on the far side of the line IJ from K. To find R, calculate the midpoint M of the best and good vertices, since the best solution lies away from the worst vertex; draw the line joining K and M of length d, and place R on its extension at a further distance d. The formula to calculate R is R = 2M − K.

Expansion (E)
The fitness at vertex R is calculated, and if it is less than that at K, the search has moved towards the minimum. The line segment through M and R is then extended beyond R to E by the distance d; if the fitness value at vertex E is better than at R, the search continues towards the minimum. The formula to calculate E is E = 2R − M.

Contraction (C)
When the fitness values at R and K do not improve the triangle, another vertex is needed to continue the process, so a contraction towards the midpoint is performed without replacement. The contraction points C 1 and C 2 are placed along the lines joining K and M, and M and R, respectively, at a distance d/2: C 1 = (K + M)/2 and C 2 = (M + R)/2.

Shrinkage (S)
The fitness value at C is calculated, and if it is not less than that at K, the vertices J and K are shrunk towards the best vertex I: the vertex J is replaced with M, and K is replaced with S, where S is the midpoint of I and K. The process is continued until the minimum value is found. The detailed flow of the NM method is shown in Algorithm 2.

Algorithm 2: Nelder-Mead Method
1: Produce a new food source v_i using the modified Nelder-Mead method
2: Let v_1, v_2, . . . , v_{n+1} denote the list of vertices
3: ɽ, µ, λ, and ζ are the coefficients of reflection, expansion, shrinkage, and contraction
4: f is the fitness function to be minimized
5: For the i = 1, 2, . . . , n + 1 vertices, do
6:   Order the vertices from the lowest fitness f(v_1) to the highest fitness f(v_{n+1}), so that f(v_1) ≤ f(v_2) ≤ . . . ≤ f(v_{n+1})
7:   Calculate the midpoint of the best n vertices: v_m = (1/n) ∑ v_i, where i = 1, 2, . . . , n
8:   Calculate the reflection point v_r = v_m + ɽ(v_m − v_{n+1})
9:   If f(v_1) ≤ f(v_r) ≤ f(v_n), then
10:    v_{n+1} ← v_r and go to the stopping criteria
11:  Else if f(v_r) < f(v_1), then calculate the expansion point v_e = v_m + µ(v_r − v_m)
12:    If f(v_e) < f(v_r), then v_{n+1} ← v_e and go to the stopping criteria
13:    Else v_{n+1} ← v_r and go to the stopping criteria
14:  Else calculate the contraction point v_c: compute the outside contraction v_c = v_m + ζ(v_r − v_m) if f(v_r) < f(v_{n+1}), and the inside contraction v_c = v_m + ζ(v_{n+1} − v_m) otherwise
15:    If v_c improves on the worst vertex, then v_{n+1} ← v_c and go to the stopping criteria
16:    Else go to the shrinkage phase: shrinkage is done between v_m and the best vertex among v_r and v_{n+1}, replacing v_i ← v_1 + λ(v_i − v_1) for i = 2, . . . , n + 1
17:  End if
18: End for
19: Determine the new vertices of the simplex thus formed based on their fitness and continue with the process of the reflection phase
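The loop above can be sketched in Python using the textbook coefficient values (reflection 1, expansion 2, contraction 0.5, shrinkage 0.5 standing in for the paper's ɽ, µ, ζ, λ); this is an illustrative sketch, not the paper's exact routine:

```python
import numpy as np

def nelder_mead_step(simplex, f, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """One Nelder-Mead iteration on a list of n+1 vertices (numpy arrays)."""
    simplex = sorted(simplex, key=f)                 # best ... worst
    best, worst = simplex[0], simplex[-1]
    m = np.mean(simplex[:-1], axis=0)                # midpoint of best n vertices
    r = m + alpha * (m - worst)                      # reflection
    if f(simplex[0]) <= f(r) < f(simplex[-2]):
        simplex[-1] = r
    elif f(r) < f(simplex[0]):                       # try expansion
        e = m + gamma * (r - m)
        simplex[-1] = e if f(e) < f(r) else r
    else:                                            # contraction
        c = m + rho * (worst - m)
        if f(c) < f(worst):
            simplex[-1] = c
        else:                                        # shrink towards the best vertex
            simplex = [best + sigma * (v - best) for v in simplex]
    return simplex

# Minimize the 2-D sphere function from an initial triangle.
sphere = lambda x: float(np.sum(x ** 2))
s = [np.array([3.0, 3.0]), np.array([4.0, 0.0]), np.array([0.0, 4.0])]
for _ in range(120):
    s = nelder_mead_step(s, sphere)
best_value = min(sphere(v) for v in s)
```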

Nelder-Mead Method-Based ABC (NM-ABC)
The ABC algorithm is easy to improve and extend, and its complexity is low since it uses few parameters. Its improved search ability helps attain optimal solutions with less computational time and increased convergence speed. However, ABC is good at exploration but weak in exploitation [40]. An improved local search algorithm is therefore incorporated into ABC to balance the search [41].
The Nelder-Mead method is a well-known local search algorithm that is simple, efficient, and easy to embed in other global search algorithms. However, it can become entrapped in a local optimum: it is poor at exploration and converges slowly from a poor initial position. In short, the NM method is good at the exploitation process but poor in the exploration process, so this paper uses the NM method in ABC to improve exploitation. The detailed pseudo code of the proposed algorithm is shown in Tables 1-3, the workflow of the proposed NM-ABC is shown in Figure 2, and the proposed NM-ABC algorithm is presented in Algorithm 3. The NM method is used in the onlooker bee phase of the ABC algorithm rather than in the employed bee phase, since the individuals participating in the onlooker bee phase are selected based on probability: an individual chosen with high probability is a quality individual in terms of fitness, and an intensive search on quality individuals leads towards the global optimum rather than searching on random individuals.
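A minimal sketch of the proposed onlooker phase: food sources are picked with fitness-proportional probability and refined by a short NM local search in place of the Equation (14) move. The helper `nm_refine`, the simplex construction, and the step counts are our assumptions:

```python
import numpy as np

def nm_refine(x, f, rng, steps=10, scale=0.1):
    """Short NM local search around x: a small random simplex is improved
    by reflection or contraction for a few steps (hypothetical helper)."""
    simplex = [x.copy()] + [x + scale * rng.standard_normal(x.size)
                            for _ in range(x.size)]
    for _ in range(steps):
        simplex.sort(key=f)
        m = np.mean(simplex[:-1], axis=0)
        r = m + (m - simplex[-1])                      # reflection
        if f(r) < f(simplex[-2]):
            simplex[-1] = r
        else:
            simplex[-1] = m + 0.5 * (simplex[-1] - m)  # contraction
    return min(simplex, key=f)

def onlooker_phase_nm(pop, f, rng):
    """Onlooker phase: choose sources with probability proportional to
    fitness 1/(1+f), refine each pick with NM, keep the better solution."""
    fits = np.array([1.0 / (1.0 + f(x)) for x in pop])
    probs = fits / fits.sum()
    for _ in range(len(pop)):
        i = rng.choice(len(pop), p=probs)
        v = nm_refine(pop[i], f, rng)
        if f(v) < f(pop[i]):                           # greedy replacement
            pop[i] = v

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
pop = [rng.uniform(-5, 5, size=3) for _ in range(8)]
before = min(sphere(x) for x in pop)
onlooker_phase_nm(pop, sphere, rng)
after = min(sphere(x) for x in pop)
```

Since replacement is greedy, the best objective value in the population can only improve or stay unchanged after the phase.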

Experimental Setup
The NSP datasets are taken from the NSPLib library; each instance contains "N" nurses, "D" days, and "S" shift patterns, with N varying from 25 to 100 nurses, and a schedule is to be made for the given D days with the S shift patterns. NSPLib contains cases describing the maximum and minimum utilization of resources for a healthcare unit and provides the coverage matrix for the days with shift patterns. The nurses' workload for a shift on a particular day relates N and D for instances with 25 to 100 nurses.
The performance of NM-ABC for the NSP is evaluated using the NSPLib dataset [42]. The dataset was accessed on 10 October 2021 from the specified link (https://www.projectmanagement.ugent.be/research/personnel_scheduling/nsp). The authors summarize the characteristics of the dataset in Table 1. The proposed technique to solve the NSP is implemented in MATLAB 2016a under Windows on an Intel i5 processor with 8 GB of RAM and 1 TB of storage. Table 1 lists the cases used by NM-ABC to solve the NSP.
The parameter settings of NM-ABC and the compared algorithms are presented in Table 2. In this experimentation, the algorithms M1 to M5 and NM-ABC use a population size of 100 and a maximum of 1000 iterations. These algorithms stop their execution when the maximum number of iterations is reached or the optimal solution is found. The comparison of the results obtained by NM-ABC clearly indicates that the proposed technique outperforms the existing methods.

Performance Metrics
The performance of the proposed technique is evaluated by comparing it with five competing methods. Eight performance metrics, listed in this section, are used to evaluate the experimental results.

Average Best Time (ABT)
The best time is the time taken to achieve the best value for a particular instance, and the average best time is the average of the best times over all test cases taken from the dataset. In the experimental analysis, the average best time (ABT) is computed using

ABT = (∑ time taken to achieve the best value) / n, (19)

where n is the total number of test cases (instances) in the given dataset.

Standard Deviation
Standard deviation (SD) is a measure of the dispersion of a set of values from its mean. The average standard deviation (ASD) is the average of the standard deviations over all test cases taken from the dataset and can be measured using

ASD = (∑ SD i) / n, (20)

where n is the number of instances in the given dataset.

Least Error Rate
The least error rate (LER) is the difference between the actual optimal value and the obtained best value, and it can be calculated using Equation (21).

Success Percentage
The success percentage is the number of instances that attain the optimal value. The average success percentage (ASP) is the average number of instances that achieve the optimal value over the total number of test cases taken from the dataset and can be computed using

ASP = (number of instances that succeed in attaining the optimal value / total number of instances) × 100, (22)

where n is the total number of instances in the given dataset.

Cost Reduction
Cost reduction is the difference between the actual cost recorded in NSPLib and the cost obtained by our approach. The average cost reduction (ACR) is the average cost reduction over the total number of test cases taken from the dataset and can be computed from

ACR = ∑ (Cost i − Cost NSPLib) / n, (23)

where n is the total number of test cases (instances) in the given dataset.

Gap
The gap is the distance to the best value. The average gap (AGap) is the average of these distances over all cases taken from the dataset and is calculated using Equation (24), where n is the number of instances in the given dataset.

#Both Feasible Solution
#Both feasible solutions (#BFS) is the number of feasible solutions found to obtain the optimal value with respect to both NSPLib and our approach's best value.

#Feasible Solution
#Feasible solutions (#FS) is the number of feasible solutions found to obtain the optimal value with respect to the known optimal values of NSPLib.
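The averaged metrics above reduce to simple ratios; a sketch of Equations (19), (22), and (23) under our reading of the definitions:

```python
def average_best_time(times):
    """Eq. (19): ABT = (sum of per-instance best times) / n."""
    return sum(times) / len(times)

def average_success_percentage(n_success, n_total):
    """Eq. (22): ASP = (instances attaining the optimum / total) * 100."""
    return 100.0 * n_success / n_total

def average_cost_reduction(costs, costs_nsplib):
    """Eq. (23): ACR = sum(Cost_i - Cost_NSPLib) / n."""
    return sum(c - r for c, r in zip(costs, costs_nsplib)) / len(costs)

abt = average_best_time([1.0, 2.0, 3.0])
asp = average_success_percentage(3, 4)
acr = average_cost_reduction([95.0, 98.0], [100.0, 100.0])
```

A negative ACR indicates that the obtained costs are below those recorded in NSPLib.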

Experimental Result Analysis
The results obtained by NM-ABC and the competing methods are shown in Table 3. The values in the table are the best values attained by the proposed algorithm and its competitors. Since the objective of the NSP is to reduce cost, the lowest values are the best solutions. To evaluate the performance of the algorithm, the authors considered 15 cases of diverse sizes with five instances each. The proposed NM-ABC achieved 44 best results out of 75 instances.
The experimental results of the 15 cases are summarized, and the best results are achieved within the allotted time. The best solution for each case across all methods is highlighted in bold font; the best values are obtained using our proposed Algorithm 1. It is noted from Table 4 that NM-ABC obtained 12 best results out of 25 instances on the smaller-sized datasets (case 1 to case 5), 14 best results on the medium-sized datasets (case 6 to case 10), and 17 best results out of 25 instances on the larger-sized datasets (case 11 to case 15). These results show that the proposed algorithm has a high potential for exploiting the search space towards better results in various ways. Table 4 also records the best time, standard deviation, and least error rate for each case over ten runs. The mean value of the proposed algorithm is 1.75% lower than that of the other competitive methods, showing that our algorithm attains a smaller worst value in addition to the best solution. The least error rate of the proposed algorithm is calculated using Equation (21) with the known optimal values recorded in NSPLib. The standard deviation is increased by 10% compared with the other competitive methods. The computational time to attain each best value is tabulated under the best-time column; our proposed method requires 39.32% less computational time to attain the best results than the other competitor methods. Table 5 lists the number of feasible solutions obtained by NM-ABC and the other methods and shows that our proposed algorithm produced more feasible solutions than those recorded in NSPLib. A solution is feasible if the hard and soft constraints are satisfied.
#Both feasible is the number of feasible solutions recorded in both NSPLib and the NM-ABC algorithm, and #feasible is the number of feasible solutions obtained with respect to NSPLib. Table 5 shows that NM-ABC attained better values than the known values reported in NSPLib for 90% of the instances. Compared with the other methods, our algorithm outperforms them with 87% of the best values, higher than any other method listed in Table 5. Thus, our algorithm contributes a new deterministic search and practical heuristic approach to solving the NSP using the NSPLib dataset.
Table 5. Comparison of the number of #both feasible and #feasible solutions obtained by the proposed algorithm and competitor methods.
Table 6 summarizes the comparison and assessment of the average best time, average standard deviation, and average success percentage obtained by our proposed algorithm NM-ABC and the competitor methods. In Table 6, columns 5 and 6 describe the average best time, average standard deviation, and average success percentage of the proposed algorithm, and columns 7 to 21 describe the performance metrics of the competitor methods.
Table 6. Comparison and assessment of ABT, ASD, and ASP obtained by the proposed algorithm NM-ABC and competitor methods.
Figure 3 compares the average best time of NM-ABC with the other methods. The best time is the time taken to attain the best value for an instance using Algorithm 2, i.e., the time to allocate and schedule nurses for a particular period without violating hard constraints and while reducing violations of soft constraints. For smaller datasets, the computational time taken by NM-ABC is reduced by 56.72%; for medium datasets, by 36.40%; and for larger datasets, by 34.31% compared with the other competitor methods. The ABT is calculated using Equation (19) and reflects the reduced computational time to schedule nurses to shifts on days over the scheduled period.
The HSHH algorithm consumes more computational time to solve the NSP, while MAPA and BCO require about 50% of the time of our proposed approach. Figure 4 shows that an increase in the standard deviation enlarges the search space explored to obtain the best value. The success percentage is the number of instances attaining the optimal value for a given instance.
Average success percentage (ASP) is the ratio of the number of instances for which the optimal value is obtained to the total number of cases taken from the dataset. ASP is calculated using Equation (22) and shows that NM-ABC has a higher success percentage than the other competing methods. In Table 7, it is observed that NM-ABC achieved a 100% success percentage except for case 3 and case 14; overall, NM-ABC shows a 97.33% success percentage. For smaller datasets, the success percentage is 96%, which is 26% more than that of all other competitor methods. For medium datasets, our algorithm achieved a 100% success percentage, 56.25% more than the other competing methods. For larger datasets, NM-ABC attained a 96% success percentage, 48.18% more than the other methods. Among the competitor methods, the multi-objective ant colony optimization algorithm (M4) achieved the second-best success percentage, 72% overall.
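The success-percentage metric described above can be computed directly from per-instance results. A minimal sketch, assuming an instance counts as a success when the obtained best value matches or improves on the known NSPLib optimum (Equation (22) is not reproduced in this excerpt, so this helper is an illustrative assumption):

```python
def average_success_percentage(obtained, known):
    """Percentage of instances whose obtained best value is at least
    as good as the known optimal value (minimization, so <=)."""
    if len(obtained) != len(known):
        raise ValueError("result lists must be the same length")
    successes = sum(1 for got, opt in zip(obtained, known) if got <= opt)
    return 100.0 * successes / len(obtained)

# Hypothetical cost values (not taken from NSPLib):
asp = average_success_percentage([271, 305, 288], [271, 303, 288])
```

With the three hypothetical instances above, two of three match the known optimum, giving an ASP of about 66.7%.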
Table 7 summarizes the comparison and assessment of the average cost reduction and average gap percentage obtained by NM-ABC and the competitor methods. In Table 7, the 4th and 5th columns describe the average cost reduction and average gap percentage of the proposed algorithm, and columns 6 to 15 illustrate the performance metrics of the competitor methods.

Figure 6 portrays the analysis and comparison of the average cost reduction of our proposed NM-ABC with the competitor methods. The cost reduction for an instance is the difference between the best-known value observed in NSPLib and the cost obtained by our algorithm; the average cost reduction (ACR) is this reduction averaged over the total number of instances and is calculated using Equation (23). Table 7 shows that NM-ABC minimizes the cost of the NSP. The main objective of the NSP is to reduce resource utilization, which is reflected by the cost reduction shown in Figure 6. Using our proposed algorithm NM-ABC, the cost of the NSP is reduced by 0.66%. For smaller instances, NM-ABC reduces the cost by 1.12%, whereas the other competing methods reduce it by only 0.11%. For medium instances, our proposed algorithm reduces the cost by 0.62% relative to the original cost value recorded in NSPLib, while the other methods reduce it by 0.70%. For larger instances, the proposed algorithm reduces the cost by 0.63%, compared with 0.68% for the other competitor methods. In the proposed NM-ABC algorithm, the cost of resource utilization decreases as the dataset size increases.
Figure 7 compares the average gap percentage of NM-ABC with the competitor methods. The gap percentage is the distance between the attained best value and the known optimal value recorded in NSPLib; the average gap (AGap) is this distance averaged over the total number of instances. The value of AGap is calculated using Equation (23), and Table 7 shows that NM-ABC attained a positive value, indicating that the algorithm moved towards the best optimum value. Our algorithm successfully solved 94.30% of the instances, reaching the best value with respect to the known value recorded in NSPLib.
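The cost-reduction and gap metrics just described can be sketched from per-instance results. Since Equation (23) is not reproduced in this excerpt, the exact normalization is an assumption: ACR is taken here as the mean absolute reduction against the best-known cost, and AGap as the mean relative distance in percent, with positive values meaning the attained cost is at or below the known optimum:

```python
def average_cost_reduction(obtained, known):
    """Mean absolute reduction: known best-known cost minus obtained cost,
    averaged over all instances (illustrative formula)."""
    return sum(opt - got for got, opt in zip(obtained, known)) / len(known)

def average_gap_percent(obtained, known):
    """Mean relative distance to the best-known cost, in percent;
    positive values indicate movement towards (or past) the optimum."""
    return 100.0 * sum((opt - got) / opt
                       for got, opt in zip(obtained, known)) / len(known)
```

For two hypothetical instances with known costs [300, 310] and obtained costs [297, 310], these helpers give an ACR of 1.5 and an AGap of 0.5%.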

Discussion
The experiments on the NP-hard combinatorial NSP are conducted with our proposed algorithm NM-ABC. Problem-specific existing algorithms are chosen and compared with the proposed NM-ABC algorithm, and the best values obtained are compared in Table 7. Various performance metrics are considered to evaluate the proposed algorithm's performance; Tables 3-7 show the outcomes of our proposed algorithm and the other existing methods. From the tables and figures, it is evident that NM-ABC is better able to attain the best value, with less computation time, relative to the known optimal values listed in NSPLib. The average number of function evaluations (NFEs) for the proposed NM-ABC is observed with respect to the number of solutions updated through the reflection, contraction, expansion, and shrinkage phases. We noticed that the NFE count of the proposed algorithm is 10⁶ for all the test cases.
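Each of the Nelder–Mead operations mentioned above costs one or more objective evaluations, which is what the NFE count tracks. The sketch below shows one generic NM iteration with the standard coefficients (alpha=1, gamma=2, rho=0.5, sigma=0.5); the paper's exact parameter settings and roster encoding are not reproduced here, so this is an illustrative stand-in, not the authors' implementation:

```python
def nelder_mead_step(simplex, f, evals):
    """One Nelder-Mead iteration over a simplex (list of points, each a
    list of floats), minimizing f. Returns the updated simplex; `evals`
    is a one-item list used as a mutable counter of *new-point*
    evaluations (a full implementation would also cache the stored
    points' values rather than re-evaluating them in comparisons)."""
    alpha, gamma, rho, sigma = 1.0, 2.0, 0.5, 0.5  # standard coefficients

    def fx(x):                       # evaluate a new point, counting NFE
        evals[0] += 1
        return f(x)

    simplex.sort(key=f)              # best point first, worst last
    n = len(simplex) - 1
    centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
    worst = simplex[-1]

    refl = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
    f_refl = fx(refl)
    if f_refl < f(simplex[0]):       # better than the best: try expanding
        exp = [c + gamma * (r - c) for c, r in zip(centroid, refl)]
        simplex[-1] = exp if fx(exp) < f_refl else refl
    elif f_refl < f(simplex[-2]):    # better than second-worst: accept
        simplex[-1] = refl
    else:                            # contract towards the centroid
        con = [c + rho * (w - c) for c, w in zip(centroid, worst)]
        if fx(con) < f(worst):
            simplex[-1] = con
        else:                        # shrink everything towards the best
            best = simplex[0]
            simplex = [best] + [
                [b + sigma * (x - b) for b, x in zip(best, p)]
                for p in simplex[1:]
            ]
    return simplex
```

In the NM-ABC onlooker phase the objective would be the penalized roster cost on a continuous encoding of the schedule; a simple quadratic function can stand in for it when exercising the step in isolation.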
Compared with the other existing methods, the mean value obtained by NM-ABC is reduced by 1.75%, and the algorithm attains a smaller worst value in addition to the best solution. The proposed method requires 39.32% less computational time to obtain the best results than the other competitor methods. The datasets are divided by size into smaller, medium, and larger datasets; the computational time taken by NM-ABC is reduced by 56.72%, 36.40%, and 34.31%, respectively. The success percentage of our proposed approach in attaining the best value is 97.33%; compared with the other methods on the various-sized datasets, our algorithm's margin is 26% for the smaller datasets, 56.25% for the medium datasets, and 48.18% for the larger datasets. The cost of our algorithm is reduced by 0.66%, and with respect to the gap percentage towards the optimum value, 94.30% of the instances were successfully solved to obtain the best value relative to the known optimal value recorded in NSPLib.
Our algorithm demonstrates significant performance in attaining the best solution, with optimized resource utilization and nurse preferences satisfied through both the hard and soft constraints. It is also shown that the existing approaches solve the NSP with higher resource utilization and more soft-constraint violations, which leads to increased cost. Our algorithm distributes the workload evenly among the nurses while preserving nurse performance and satisfaction. The proposed system is also tested on larger datasets and performs considerably better than the other techniques.

Conclusions
This paper solves the NSP using a hybrid artificial bee colony algorithm with the Nelder-Mead method (NM-ABC). The proposed algorithm is evaluated on the NSPLib dataset, and its performance is compared with five other existing methods on that dataset. To assess our proposed algorithm, 75 cases from various-sized datasets were chosen for evaluation, and 44 of the 75 instances achieved the best result. The proposed algorithm is compared with the other existing techniques in terms of best time, standard deviation, least error rate, success percentage, cost reduction, average gap, the number of #both feasible solutions, and the number of #feasible solutions. Across the listed metrics, the proposed NM-ABC outperforms the existing algorithms on most instances of the NSP. Future work on this research can extend the NSP formulation with more objectives for optimization.