A Modified Niching Crow Search Approach to Well Placement Optimization

Abstract: Well placement optimization is considered a non-convex and highly multimodal optimization problem. In this article, a modified crow search algorithm is proposed to tackle the well placement optimization problem. The proposed modifications are based on local search and niching techniques within the crow search algorithm (CSA). First, the suggested approach is verified on benchmark functions, where it demonstrated a higher convergence rate and better solutions. The performance of the proposed technique is then evaluated on the well placement optimization problem and compared with particle swarm optimization (PSO), the gravitational search algorithm (GSA), and the crow search algorithm (CSA). The outcomes of the study reveal that the niching crow search algorithm is the most efficient and effective of the compared techniques.


Introduction
Optimization plays a vital role in scientific, manufacturing, and environmental processes in the modern world [1]. To solve problems of this kind, researchers use several different methods to determine the right approach for a specific problem [2,3], and are constantly searching for more sophisticated modeling strategies [1,4,5]. Conventional exact approaches are best suited to smaller problems that are constant and distinct [4,5]. In real-world problems, which are not always differentiable, conventional exact approaches cannot escape local optima [6]. On the other hand, the performance of a metaheuristic algorithm is problem specific [7]. Metaheuristic techniques are used in a vast range of studies [8,9].
The multimodal optimization problem of well placement is one of the most difficult factors in the development process in the oil and gas industry. A growing body of literature addresses the well placement optimization problem as an extremely non-smooth, non-convex cost function with several local optima [10,11]. Based on contemporary research, the optimization methods in this field can be grouped into three major categories: (a) classical methods, (b) non-classical methods, and (c) hybrid methods [10]. In early research endeavors to tackle the well placement optimization problem, the focus was mainly on classical methods, among which the finite difference method [12], mixed-integer programming (MIP) [13], multivariate interpolation algorithms [14], the steepest ascent method [15], and the simultaneous perturbation stochastic approximation method [16,17] are significant. However, the biggest bottleneck of gradient-based techniques is that they may get trapped in local optima; hence, they can rarely solve the well placement optimization problem, given its nature. Non-classical methods, by contrast, do not require gradient calculation and are less likely to get trapped in local optima than classical methods [18][19][20]. Therefore, non-conventional, gradient-free approaches are considered for this problem.
Non-conventional gradient-free techniques such as ant colony optimization (ACO) [21], improved harmony search (IHS) [22], differential evolution (DE) [23,24], cuckoo search (CS) [25], the imperialist competitive algorithm (ICA) [26][27][28], the firefly algorithm [29], the smart flower optimization algorithm [30,31], particle swarm optimization (PSO) [32,33], the covariance matrix adaptation evolution strategy (CMA-ES) [34,35], artificial bee colony (ABC) [36], the bat algorithm (BA) [37,38], and the genetic algorithm [39][40][41][42][43] have been applied to the well placement optimization problem. Although such algorithms can provide better solutions than traditional techniques, they tend to get trapped in local optima and are still heavily affected by parameter tuning. Researchers have therefore combined non-conventional methods with classical methods [44] to create hybrid algorithms [45]. Following this approach, many researchers have combined the best features of different algorithms into hybrid strategies [46,47] that demonstrated superior performance on the well placement optimization problem; this form of methodology, however, lacks theoretical studies to support the proposed combinations. The gravitational search algorithm (GSA) has been shown to be superior to the genetic algorithm (GA) [48]. In another analysis, PSO provided a higher net present value than GA [11], and BA provided a better net present value than GA and PSO in a further study [37]. Contract-theoretic frameworks have also been used in optimization; convex optimization methods and optimal contracts can be followed to solve the corresponding optimization problem in [49]. In addition, proxy techniques can provide faster convergence; however, they are susceptible to inaccurate approximation, which may lead to error [50].
Hence, based on the above discussion, it can be inferred that a single algorithm cannot be depicted as a superior algorithm in this field. Nevertheless, these global optimization algorithms are still susceptible to the local optimum, and due to having a stochastic nature, the results are often unstable [51].
Again, the cost functions are discontinuous and non-convex due to the reservoir heterogeneities ( [52,53]), and contain several local optima. Besides, parameter tuning requires a considerable number of reservoir simulations due to computational expenses [54]. The trial and error technique for finding the right configuration is, therefore, impractical. Again, the selection of suitable strategies is difficult due to the lack of studies in exploration and exploitation techniques [55]. Besides, only one reservoir is used in most instances to test the algorithms' performance. The supremacy of the algorithm in this area is not defined by such an approach, as reservoir heterogeneity alters the surface of the search field. Besides, metaheuristic algorithms are stochastic. Thus, to develop a methodology in this area, statistical analysis and certain criteria should be considered. It is, therefore, necessary to perceive an effective algorithm with an intrinsic ability to solve this complex optimization problem.
Oliva et al. stated that the crow search algorithm can maximize threshold values for magnetic resonance brain image segmentation while preventing premature convergence [56]. The size and location of capacitors in a power distribution network have also been configured using the crow search algorithm [57]. Aleem et al. implemented the crow search algorithm and the genetic algorithm [58] to design a third-order high pass filter, where the CSA demonstrated a higher degree of convergence. While the CSA has been applied in several fields, researchers have suggested numerous strategies to address its deficits [59]. To overcome feature selection problems, Sayed et al. suggested a sine chaotic crow search algorithm [60]; with this new approach, classification efficiency increased considerably. To avoid local optima, Jain et al. combined levy flight with the CSA [59]. The awareness probability of the CSA was updated by Díaz et al. [61]. Mohammadi and Abdi improved the CSA to solve the economic load dispatch problem [62] by modifying the flight length in their proposed algorithm. The CSA was also integrated with chaos theory and applied to Parkinson's disease prediction and multi-objective optimization problems [63,64]. The above research improves the optimization efficiency of the crow search algorithm on specific problems. However, the primary CSA provides low-quality solutions for complex, multimodal optimization problems. Hence, the existing drawbacks of this search algorithm include its inability to avoid local optima and its poor convergence in multimodal search regions.
There have been no studies on the niching crow search algorithm (NCSA) to solve the well placement optimization problem. Therefore, this study proposes an improved crow search technique for solving multimodal benchmark functions and real-world multimodal optimization problems such as the well placement optimization problem in the oil and gas industry, and compares the results with a wide range of established algorithms. In addition, collecting the end result in terms of cost function value does not help to fully understand the internal behavior of the algorithms. Hence, this study attempts to explain the internal behavior of the algorithms on well placement optimization with graphical illustrations. In this study, two case studies are conducted to find the maximum net present value and statistical analysis is provided to find a better algorithm.

Problem Formulation
The prime motivation behind well placement optimization is to ensure that expenditure remains at a minimum while the net profit is maximized. In general, well placement optimization can be formulated as:

maximize NPV(u_1, u_2, ..., u_n)

subject to:

LB ≤ u_n ≤ UB

where u_n represents the well coordinates, NPV is the net present value, and LB and UB are the lower and upper bounds of the reservoir, respectively. NPV changes with the coordinates of the well location. Eclipse simulation was used to calculate the cumulative oil production, cumulative gas production, and cumulative water production values for given well coordinates. The variables used in (3) are depicted from [10]. Hence, the NPV for a reservoir model can be formulated as:

NPV = Σ_{t=1}^{T} (Q_O P_O + Q_g P_g − Q_w C_w − OPEX) / (1 + D)^t − CAPEX

where P_g denotes the gas price, Q_w the cumulative water production, D the discount rate, Q_g the cumulative gas production, T the number of years since production started, CAPEX the capital expenditure, P_O the oil price, C_w the cost per unit volume of produced water, OPEX the operational expenditure, and Q_O the cumulative oil production.

The dynamic behavior of an oil reservoir can be described by spatiotemporal differential equations. For a two-phase oil reservoir, these are:

∂/∂t ( ε S_f / B_f ) = ∇ · [ (K kr_f) / (μ_f B_f) (∇P_f − ρ_f g ∇z) ] + q_f ,  f ∈ {w, o}

where μ_f is the viscosity of phase f (f = w for water and f = o for oil), ρ_f is the density, ε is the porosity, K is the absolute permeability tensor, B_f is the formation volume factor, S_f is the saturation, P_f is the pressure, kr_f is the relative permeability, q_f is the reservoir flow, g is the gravitational acceleration, and z is the depth. In mathematical analysis and modeling, the relative permeability of each phase is usually defined using:

kr_w = kr_w^0 [ (S_w − S_wr) / (1 − S_wr − S_or) ]^a ,  kr_o = kr_o^0 [ (1 − S_w − S_or) / (1 − S_wr − S_or) ]^b

where S_wr and S_or are the residual water and oil saturations, a and b are the exponents in Corey's correlation, and kr_w^0 and kr_o^0 are the end-point relative permeabilities for water and oil, respectively.

The pressures and saturations of the oil and water phases are interrelated by:

P_c = P_o − P_w ,  S_o + S_w = 1

where P_c is the capillary pressure, P_o and P_w are the oil and water pressures, and S_o and S_w are the oil and water saturations. The flow rate at the wellbore can then be defined using the following output equation (IPR, or inflow performance relation):

q_f = Ψ (P_f − BHP)

where Ψ is the well transmissibility factor, and the vertical flow performance (VFP) curve defines the bottom hole pressure (BHP):

BHP = THP + ρ g L + ΔP + Δp

where ΔP indicates the frictional pressure drop in the well tubing, Δp is the pressure drop due to acceleration, L reflects the well tubing length, ρ is the density of the well output, and THP is the tubing head pressure. For a given well placement, the overall flow, the water/oil (gas/oil) ratio, and the inlet/outlet pressures define the pressure drop in Equation (5). Figure 1 depicts the general flowchart of well placement optimization. To address this problem, researchers have used several different methods to determine the right approach; in previous research, both conventional and non-conventional optimization techniques have been applied. However, both gradient-based and gradient-free optimization techniques suffer from local optima [65].
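The discounted-cash-flow objective above can be sketched in code. This is a minimal illustration, not the paper's implementation: all prices, volumes, and cost figures below are made-up assumptions, the placement of OPEX inside the yearly discounted term is one common convention, and in the actual study the production volumes per candidate placement come from an Eclipse simulation.

```python
# Hedged sketch of the NPV objective described above. All numeric values
# are illustrative assumptions; production volumes would normally come
# from a reservoir simulation for a candidate well placement.

def npv(q_oil, q_gas, q_water, p_oil, p_gas, c_water,
        capex, opex, discount_rate):
    """Net present value over T years of yearly production volumes."""
    value = -capex                       # capital expenditure up front
    for t, (qo, qg, qw) in enumerate(zip(q_oil, q_gas, q_water), start=1):
        # yearly revenue minus water-handling cost and operating expenditure
        revenue = qo * p_oil + qg * p_gas - qw * c_water - opex
        value += revenue / (1.0 + discount_rate) ** t
    return value

# Example with made-up numbers: three production years.
example = npv(
    q_oil=[1.0e6, 8.0e5, 6.0e5],      # bbl/year (assumed)
    q_gas=[5.0e6, 4.0e6, 3.0e6],      # scf/year (assumed)
    q_water=[1.0e5, 2.0e5, 3.0e5],    # bbl/year (assumed)
    p_oil=60.0, p_gas=0.003, c_water=5.0,
    capex=2.0e7, opex=1.0e6, discount_rate=0.1,
)
```

The optimizer treats this function as a black box: it only changes the well coordinates, which in turn change the simulated production profiles fed into the sum.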

Methodology
Metaheuristic algorithms are stochastic and non-deterministic in nature; particle swarm optimization, for example, was first developed in 1995 by Kennedy and Eberhart [66]. There are various types of metaheuristic methods, such as local search, simulated annealing, tabu search, variable neighbourhood search, and population-based or trajectory-based search. Among the most popular metaheuristic search algorithms are the crow search algorithm, particle swarm optimization, and the gravitational search algorithm. Since well placement optimization is considered a multimodal optimization problem, the proposed NCSA algorithm can also be applied to other multimodal problems such as harmonic elimination and the economic emission dispatch problem. In Figure 2, an overview of the optimization model is depicted. In the following section, the optimization techniques are discussed.


Crow Search Algorithm (CSA)
The crow search algorithm is inspired by natural events. In Figure 3, the flow chart diagram of the primary crow search algorithm is shown. Crows have astounding capacities to take care of complex problems. They closely observe other birds and watch very carefully where others try to hide their food. They attempt to take the food when the proprietor vacates their spot. The crow search algorithm is designed based on the following facts.
a. Crows live in a herd;
b. Crows remember the location of concealed places of food;
c. Crows can commit burglary by following the other crows;
d. Crows protect their caches of food from being robbed.

In standard CSA, the position of each crow changes according to the perception of other crows. For example, suppose crow i chases crow j to steal the hidden food of crow j. Crow i then updates its location using the following formula:

X_{i,itr+1} = X_{i,itr} + r_i × f_l^{i,itr} × (M_{j,itr} − X_{i,itr}),  if r_j ≥ AP_{j,itr};  otherwise, a random position

where AP_{i,itr} and f_l^{i,itr} are the awareness probability and flight capability of the ith crow in the itrth iteration, respectively, r_i denotes a random number for the ith crow, and M_{j,itr} denotes the memory location of the jth crow in the itrth iteration. Again, M_{i,itr+1} is updated based on the following equation:

M_{i,itr+1} = X_{i,itr+1}, if f(X_{i,itr+1}) is better than f(M_{i,itr});  otherwise, M_{i,itr}
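The standard position and memory updates above can be sketched as follows. This is an illustrative toy, not the paper's code: the parameter values (fl = 2.0, AP = 0.1), the one-dimensional search space, and the sphere objective are all assumptions made for the example.

```python
import random

# Minimal sketch of the standard CSA update described above. Parameter
# values and the sphere objective are illustrative assumptions.

def csa_step(positions, memory, fitness, fl=2.0, ap=0.1, lb=-10.0, ub=10.0):
    n = len(positions)
    new_positions = []
    for i in range(n):
        j = random.randrange(n)              # crow i follows a random crow j
        if random.random() >= ap:
            # crow j is unaware: move towards crow j's memorised food spot
            x = positions[i] + random.random() * fl * (memory[j] - positions[i])
        else:
            # crow j is aware: crow i flies to a random position instead
            x = random.uniform(lb, ub)
        new_positions.append(min(ub, max(lb, x)))
    # each crow memorises the best (lowest-cost) position it has found so far
    new_memory = [x if fitness(x) < fitness(m) else m
                  for x, m in zip(new_positions, memory)]
    return new_positions, new_memory

random.seed(0)
sphere = lambda x: x * x
pos = [random.uniform(-10.0, 10.0) for _ in range(5)]
mem = list(pos)
init_fit = min(sphere(m) for m in mem)
for _ in range(200):
    pos, mem = csa_step(pos, mem, fitness=sphere)
best_fit = min(sphere(m) for m in mem)
```

Because the memory only ever accepts improvements, the best memorised fitness is non-increasing over iterations, which is the property the modifications in the next section build on.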

Niching Crow Search Algorithm (NCSA)
The existing shortcomings of this search algorithm involve its inability to avoid local optima and its slow convergence in multimodal search areas, which is addressed in ref. [67]. Since the search relies entirely on a random search in (12), faster convergence is not guaranteed. The proposed method brings two changes to tackle the multimodal optimization problem, making it more practical for a range of applications without losing the attractive features of the original technique. Our proposed algorithm modifies the crow's position update equation with a Gaussian perturbation ϕ and a niching technique. With these incorporations, the new method improves convergence on multimodal optimization problems and provides better solutions than the original crow search algorithm. The flow chart of the NCSA algorithm is shown in Figure 4 and its pseudo code in Algorithm 1. The flying capacity (f_l^{i,itr}) and awareness probability (AP) remain constant throughout the search process in the primary crow search algorithm, which is not appropriate for balancing exploration with exploitation. Therefore, iteration-dependent schedules are suggested in (10) and (11), where the current iteration number and the maximum number of iterations are denoted by itr and itr_max, respectively. Again, a global best operator, set to min M_{j,itr}, is included to adjust the position update of the primary crow search algorithm in (16). To introduce randomness into the equation, the Gaussian distribution ϕ is incorporated. If r_i ≥ AP_i^itr is true, then the new location is updated using (16), providing higher exploration capacity.
where ϕ = randn(1, D) is a vector of D random numbers drawn from the standard Gaussian distribution, r_i is a random number, min M_itr denotes the best memory location in the itrth iteration, M_{i,itr} denotes the memory location of the ith crow in the itrth iteration, and f_l^itr is the flying capacity in the itrth iteration. Again, if r_i ≥ AP_i^itr is false, then (10) is executed to increase exploitation capacity.
where nearest best is the memory location of the crow whose fitness memory is best among those nearest to crow i. This inclusion directs the search towards the crow with the nearest best fitness memory, which offers a better multimodal optimization solution.
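The iteration-dependent flying capacity and awareness probability, together with the Gaussian-perturbed pull towards the global best, can be sketched as below. Note that the exact expressions of Equations (10), (11), and (16) are not reproduced in the text above, so the linearly decaying/growing schedules and the exact form of the exploration move here are illustrative assumptions only.

```python
import random

# Assumed schedules standing in for Equations (10) and (11): flight length
# shrinks and awareness probability grows as the search proceeds, shifting
# the balance from exploration to exploitation. The concrete constants are
# illustrative, not from the paper.

def schedules(itr, itr_max, fl_max=2.0, fl_min=0.5, ap_max=0.9, ap_min=0.1):
    frac = itr / itr_max
    fl = fl_max - (fl_max - fl_min) * frac   # exploration shrinks over time
    ap = ap_min + (ap_max - ap_min) * frac   # awareness grows over time
    return fl, ap

def exploration_move(x, global_best, fl):
    # Sketch of the Equation (16)-style move: a pull towards the global
    # best memory, perturbed by a standard Gaussian phi.
    phi = random.gauss(0.0, 1.0)
    return x + fl * phi * (global_best - x)

fl0, ap0 = schedules(0, 100)
fl_end, ap_end = schedules(100, 100)
```

Early iterations thus take long, noisy steps (high fl, low AP), while late iterations take short steps with frequent random restarts suppressed, matching the exploration-to-exploitation transition the text describes.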
To locate the nearest best, the fitness Euclidean distance ratio (FER) [68] is used in (17). In the primary crow search algorithm, the crow's personal best is used as a memory to preserve the best solution. In this updated crow search technique, a random crow's memory location is used rather than a single global best. In this approach, each crow is drawn to a neighborhood point by measuring its FER value. This technique is incorporated to accurately locate all global optima given a sufficient population size. One notable benefit of this approach is that it does not require niching parameter specifications.
In each iteration, the nearest best in (17) is determined using (18). The FER between two crows, crow i and crow j, in the population is calculated using the equation below:

FER_{(j,i)} = α · ( f(p_j) − f(p_i) ) / ||p_j − p_i|| ,  with  α = ||s|| / ( f(p_g) − f(p_w) )

where ||s|| represents the size of the search space, measured by its diagonal distance, p_g denotes the best fitness location and p_w the worst fitness location among the crows, α is a scaling factor, f(p_w) is the worst fitness value, and f(p_g) is the best fitness value of the crows.
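The FER neighbourhood rule can be sketched for a minimisation problem as follows (the paper maximises NPV; flipping the fitness difference adapts the rule, and the positions, fitness values, and search-space diagonal below are illustrative assumptions).

```python
import math

# Sketch of the fitness-Euclidean distance ratio (FER) rule described
# above, written for minimisation: a crow is attracted to the neighbour
# that is both fitter and nearby, without any niching radius parameter.

def fer_nearest_best(i, memory, fits, diag):
    f_best, f_worst = min(fits), max(fits)
    if f_worst == f_best:
        return i                       # all crows equally fit: stay put
    alpha = diag / (f_worst - f_best)  # scaling factor based on ||s||
    best_j, best_fer = i, -math.inf
    for j in range(len(memory)):
        if j == i:
            continue
        dist = math.dist(memory[i], memory[j])
        if dist == 0.0:
            continue
        # reward crows that are both fitter (lower f) and nearby
        fer = alpha * (fits[i] - fits[j]) / dist
        if fer > best_fer:
            best_fer, best_j = fer, j
    return best_j

mem = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
fits = [3.0, 1.0, 0.5]                 # crow 2 is fittest but far from crow 0
diag = math.dist((1.0, 1.0), (10.0, 10.0))
target = fer_nearest_best(0, mem, fits, diag)
```

In this toy example crow 0 is attracted to crow 1 rather than the globally fittest crow 2, because crow 1's combination of fitness gain and proximity yields the larger FER value; this is exactly the mechanism that lets different crows converge to different optima.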

Begin
  Initialize positions and memory positions randomly; set crow size (D), swarm size of all crows (k), and maximum number of iterations (itr_max); set itr = 0;
  while itr < itr_max do
    set itr = itr + 1;
    for j = 1 to swarm size do
      find the Euclidean distance among the crows' best positions;
      calculate the fitness Euclidean distance ratio using (14);
      find the nearest best for the jth crow;
    end for
    for i = 1 to swarm size do
      calculate f_l^{i,itr} and AP using (10) and (11);
      if rand > AP
        calculate X_{i,itr+1} using (12);
      else
        calculate X_{i,itr+1} using (13);
      end if
      evaluate the cost function;
      update the memory position;
      do a local search;
    end for
  end while
End

In the niching technique, swarms may fluctuate around the global best, but they cannot reach it because of their poor local search capabilities. Hence, a local search technique is implemented, whose pseudo code is shown in Algorithm 2. This technique uses local searches similar to [69], where a solution is generated near the crow's memory position with a small step. The local search process therefore yields a better solution and also speeds up convergence.
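The local search step can be sketched as below: a candidate is generated near each crow's memory position with a small step and kept only if it improves the memorised fitness. The step size and the sphere objective are illustrative assumptions; Algorithm 2 in the paper may differ in detail.

```python
import random

# Sketch of the greedy local search described above: perturb each memory
# position slightly and accept only improvements, so memorised fitness
# can never get worse.

def local_search(memory, fitness, step=0.1):
    improved = []
    for m in memory:
        candidate = tuple(v + random.uniform(-step, step) for v in m)
        improved.append(candidate if fitness(candidate) < fitness(m) else m)
    return improved

random.seed(1)
sphere = lambda p: sum(v * v for v in p)
mem = [(2.0, -1.0), (0.5, 0.5)]
for _ in range(100):
    mem = local_search(mem, sphere)
```

Because acceptance is greedy, this refinement fine-tunes positions already near an optimum without disturbing the niching behaviour of the outer loop.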

Computational Complexity
Due to the incorporation of the niching technique and local search, the computational complexity increases. The local search intensively searches around the global best to fine-tune the obtained result. Again, the niching search requires measuring the Euclidean distances between crows, which adds further computational expense. However, well placement decisions concern billion-dollar projects; hence, such complexity is acceptable if better results are obtained.

Benchmark Functions
The classic benchmark functions used to evaluate the performance of the proposed algorithm are shown in Appendix A [70]. They can be divided into three categories: unimodal, multimodal, and fixed-dimensional multimodal. For the evaluation, 30 particles and 1000 iterations are used for each algorithm. Each algorithm is run 30 times, and statistics (such as the mean and standard deviation) are recorded in Tables 1-3. To demonstrate the superiority of the proposed algorithm, its results are compared with particle swarm optimization (PSO) [71], the gravitational search algorithm (GSA) [71], and the primary crow search algorithm. Statistically, of the 23 test functions, the niching crow search algorithm (NCSA) is superior on 15 functions, PSO on 7, CSA on 8, and GSA on 6 (ties account for the overlapping counts).

Exploitation Analysis
To test the exploitation capacity of the proposed algorithm, the unimodal benchmark functions f1 to f7 from Appendix A are used. The results in Table 1 imply that NCSA is a better algorithm than PSO, GSA, and the crow search algorithm. Therefore, NCSA improves the exploitation capacity of crow search and outperforms all the other algorithms on f1 to f7 in Appendix A, except on f6.

Exploration Analysis
The benchmark functions f8 to f13 (Appendix A) are suitable for testing exploration ability. These multimodal functions have many local optima, and finding the global best is very difficult. Since the focus of the study is to tackle multimodal optimization problems, the proposed algorithm should work better on these benchmark functions. According to Table 2, the proposed NCSA performed better than CSA. Tables 2 and 3 demonstrate that the proposed NCSA algorithm outperforms the other algorithms on the f8, f9, f10, f11, f15, f16, f17, f18, and f19 benchmark functions, and offers competitive results in the other cases. Therefore, the results show that NCSA is superior in terms of exploration capacity.

Convergence Analysis
Since the focus of the study is to tackle the multimodal optimization problem, in Figure 5 the convergence curves of the benchmark functions from f 8 to f 13 are shown. In these Figures, the X-axis shows the number of iterations, and the Y-axis shows the fitness value of the reference function. From Figure 5, it can be concluded that NCSA provides better convergence and better solutions for multimodal problems than CSA. In addition, Figure 6 shows the search history for the NCSA technique.


Well Placement Optimization
To optimize the well placement problem in these reservoirs, PSO, CSA, GSA, and NCSA are used. The Eclipse simulator provides production data for a specific well placement, and the optimization algorithm provides the specific location of the well. Each algorithm was run 16 times. In each trial, the number of iterations and particles for all algorithms were 100 and 20, respectively. The parameters used in these algorithms and the economic parameters used to evaluate (4) are shown in Tables 4 and 5. To perform the experimental tests, for case study 1, the PUNQ-S3 reservoir model is considered. PUNQ-S3 is a synthetic reservoir model based on a real field operated by Elf Exploration Production. Details of the reservoir model can be found in [73]. The PUNQ-S3 model has a 19 × 28 × 5 grid block. In this study, the authors have considered four vertical wells for optimization purposes. Each well has coordinates (x, y). Therefore, the total number of variables optimized in this experiment is 2 × 4. Figure 7 shows the initial condition of the PUNQ-S3 reservoir in case study 1.
Again, to run the second case study, the authors have considered the first SPE model. The first SPE model is a synthetic reservoir model based on a three-dimensional black oil reservoir simulation problem. Details of the reservoir model can be found in [74]. The first SPE model has a 10 × 10 × 3 grid block. In this study, two vertical well locations were optimized. Therefore, the total number of variables to be optimized in this experiment is 2 × 2. Figure 8 shows the initial condition for the SPE-1 reservoir in case study 2.

Input Data
Each well has coordinates (x, y). The total number of variables optimized is 2 × 4 in case study 1 and 2 × 2 in case study 2. The PUNQ-S3 model has a 19 × 28 × 5 grid block and the first SPE model has a 10 × 10 × 3 grid block. Hence, in case study 1, the ith candidate solution is

u_i = (x_i1, y_i1, ..., x_i4, y_i4);  1 ≤ x_ij ≤ 19,  1 ≤ y_ij ≤ 28,

and in case study 2,

u_i = (x_i1, y_i1, x_i2, y_i2);  1 ≤ x_ij ≤ 10,  1 ≤ y_ij ≤ 10,

where the vector u represents the well locations matrix and serves as the input in Equation (4). This study aims to maximize the net present value by changing the well locations. Initially, u is generated randomly within the lower and upper limits. Then, u is used in Equation (4) to calculate the net present value. After that, the algorithm searches for the well locations which give the maximum net present value.
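The encoding above can be sketched as follows. This is an illustrative toy: the rounding/clipping policy for keeping candidates on the grid is an assumption (the paper does not spell it out), and in the actual study each candidate u would be scored by an Eclipse simulation rather than inspected directly.

```python
import random

# Sketch of the well-location encoding described above: case study 1 uses
# four wells on a 19 x 28 areal grid (8 variables), case study 2 two wells
# on a 10 x 10 grid (4 variables). Clipping keeps continuous optimizer
# proposals on valid integer grid blocks (an assumed policy).

def random_candidate(n_wells, nx, ny):
    return [(random.randint(1, nx), random.randint(1, ny))
            for _ in range(n_wells)]

def clip_to_grid(u, nx, ny):
    return [(min(nx, max(1, round(x))), min(ny, max(1, round(y))))
            for x, y in u]

random.seed(3)
case1 = random_candidate(4, 19, 28)       # 2 x 4 = 8 decision variables
case2 = random_candidate(2, 10, 10)       # 2 x 2 = 4 decision variables
fixed = clip_to_grid([(0.4, 30.7), (12.2, 5.6)], 19, 28)
```

Out-of-range proposals such as (0.4, 30.7) are snapped to the nearest valid grid block, so every candidate handed to the simulator is a legal well position.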

Convergence Analysis
Convergence curves are an important tool for analyzing the convergence speed of an algorithm and comparing its performance with other algorithms. Figure 9 plots the average NPV against the number of iterations for the GSA, PSO, CSA, and NCSA algorithms. Case study 1 used 100 iterations and case study 2 used 30 iterations [54]. In case study 1, the search space is a 19 × 28 × 5 grid block, while case study 2 has a 10 × 10 × 3 grid block; the search space of case study 1 is therefore larger, and finding an optimized location requires more effort. For the PUNQ-S3 reservoir, a previous study likewise used 100 iterations [75]. Figure 9b shows that, after a certain point, the best result no longer changed in case study 2: the convergence curve flattens into a horizontal line, indicating that the search has reached the optimum value and no further improvement is possible. Therefore, for case study 2, 30 iterations are sufficient.

From Figure 9a,b, it is clear that NCSA is superior to the other algorithms at finding a better NPV value. The next best algorithm is PSO, which achieved the second-best NPV, while GSA and CSA were stuck in local optima. In both case studies, NCSA converged faster and reached the highest NPV value, whereas the CSA algorithm failed to provide a satisfactory NPV. Figure 9 suggests that, unlike in CSA, the modified search expression in NCSA helps to avoid local optima during convergence. Besides, the proposed changes make NCSA more robust across most of the test functions and both case studies. For example, on multimodal optimization problems, NCSA is more dynamic than CSA, which is further validated by the exploration and exploitation graphs shown in Figure 10. According to the convergence graph, NCSA converges to the best position more effectively than CSA, GSA, and PSO.

Exploration and Exploitation Analysis
Dimension-wise analysis sheds light on the internal behavior of the algorithms [76]. In this context, it is important to have a strong diversification to better exploit the search space. A small degree of diversification implies local convergence. To measure the diversity in each dimension, the authors have considered the equation proposed in [77].
where x ij is the dimension j of the and i th swarm, median(x j ) refers to the median of dimension j in the whole population, and n is the total number of the population. So, the average diversity can be calculated using (13): where Div is the diversity measurement of the whole population in an iteration. Hence, exploration and exploitation can be measured using the following equation: where Div max presents the maximum diversity of whole populations in one run. Unless the behavior of the swarm in the iterative process is revealed, understanding the final result of the diversity or the value of the objective function is difficult. Therefore, to realize the true significance of these strategies, this study used a graphical representation of diversity measurement to exemplify the behavior of swarms in standard CSA, PSO, GSA, and NCSA. Figure 9 shows the exploration and exploitation curve with 100 iterations. It can be inferred from the illustration that the appropriate ratio of exploration and exploitation achieved a higher NPV. On the other hand, these graphs clearly show a gradual shifting in intensity between the exploration and exploitation rate during the iterative process of PSO. However, during the NCSA's search process, the exploration and exploitation rate remained constant. From Figure 6, it can be seen that NCSA has better convergence capability than other algorithms in well location optimization problems. The graphical evidence provided by Figure 10 illustrates that NCSA had a higher average exploration rate and GSA had a relatively higher exploitation rate. In case study 1, the NCSA achieved an average diversity of 84%. However, the average diversity of PSO, GSA, and CSA is 27.5%, 6.3%, 17.4%, respectively. This indicates that the average diversity achieved by other algorithms is much lower than the NCSA. In addition, compared to other algorithms, the modified NCSA provided a better global optimal solution.
Again, in case study 2, the average diversity of NCSA, PSO, GSA, and CSA is 60.5%, 22%, 40% and 83%, respectively. In this case, CSA had the highest exploration rate. However, CSA failed to achieve a better global optimum. Hence, this result reveals that for well placement optimization, a higher exploration rate is necessary. However, the appropriate ratio of the exploration and exploitation should be carefully set according to the search field.

Performance Measurement and Statistical Analysis
To evaluate the performance of the algorithms, several criteria were considered for this problem [17,78]: Effectiveness is a straightforward indicator of performance and is the total value of the best answer found in the trials as a percentage of the global optimum, or where pˆi is the best solution found in trial i, p * is the global optimum solution, f (p) is the value of solution p, and N is the number of trials for each algorithm. Efficiency indicates the pace with which the algorithm achieves at least 98 percent of the best solution found using a specific number of evaluations, on average between tests, or where L 98 i is the number of unique function evaluations required to find solution q such that f (q) equals 98 percent of the best solution found for trial i (for minimization) and M is the total number of function evaluations per trial.
Additionally, maximum, minimum, mean, standard deviation, effectiveness, and efficiency are calculated for each algorithm in trails and listed in Tables 6 and 7. The mean value and standard deviation indicate the robustness of the algorithm. The results of case study 1 show that the NCSA algorithm is superior on four criteria. However, PSO was more efficient. The reason for this phenomenon is that efficiency indicates the speed of that algorithm to achieve a certain performance. Thus, despite achieving a lower average NPV than NCSA, PSO achieved higher efficiency. The box plot results are shown in Figure 11. Figure 11a reveals that NCSA has reached a higher NPV value compared to other algorithms. The result of case study 2 in Table 7 shows that the NCSA algorithm is superior in six criteria. However, PSO also achieved the same best value. Figure 11b shows that NCSA has the lowest standard deviation compared to other algorithms, and PSO has the second-lowest standard deviation. Table 6. Statistical data of applied metaheuristic algorithms on well placement optimization for case study 1.

Wilcoxon's Test
To validate whether the NCSA results are statistically different from the other three algorithms, a non-parametric statistical test, the Wilcoxon test [79,80], with a significance level of 0.05 is used. Table 8 shows the one tail p-value, two tail p-value, and Z values for the Wilcoxon test. A p-value less than 0.05 and a Z value higher than 1.96 is required to reject the null hypothesis. Table 8 shows that there is a statistical difference between the performance of NCSA and other algorithms except in one case. In case study 1, the performance of the NCSA and PSO are not statistically different. However, small differences in results may bring higher economic profit. Therefore, it can be inferred that NCSA can bring improved results from the other three algorithms.

Wilcoxon's Test
To validate whether the NCSA results are statistically different from the other three algorithms, a non-parametric statistical test, the Wilcoxon test [79,80], with a significance level of 0.05 is used. Table 8 shows the one tail p-value, two tail p-value, and Z values for the Wilcoxon test. A p-value less than 0.05 and a Z value higher than 1.96 is required to reject the null hypothesis. Table 8 shows that there is a statistical difference between the performance of NCSA and other algorithms except in one case. In case study 1, the performance of the NCSA and PSO are not statistically different. However, small differences in results may bring higher economic profit. Therefore, it can be inferred that NCSA can bring improved results from the other three algorithms.

Sensitivity Analysis
Design of Experiments (DOE) is a valuable tool for studying the effects of one or more variables on physical experiments [81]. Scientific experiments can only be performed on a small scale and are expensive and time-consuming. Therefore, limiting the number of parameter configurations to test is important for sensitivity studies. For this purpose, the design of experiments (DOE) was proposed by Fisher [82].
The goal of this design is to gain the greatest possible knowledge of the impact of relevant parameters on the results of the model at the lowest cost. Typically, in computer models, this cost is to keep computer time low by limiting the number of parameter combinations to investigate. It then simulates using a combination of these parameters, and finally evaluates the effect and makes a hypothesis. [83,84]. In this study iteration, population, awareness probability, and flight length parameter are used. In addition, the first-order Sobol sensitivity index (SI) is used [85] in many studies. Hence, in this study, the Sobol sensitivity index [86] technique is considered. In case study 1, the following lower bound and upper bound are used and listed in Table 9. Figure 12 shows the effects of different parameters. Figure 12a shows the NPV value with population and iteration for Case study 1. Figure 12a shows that after increasing from 100 iterations and 20 populations to 150 iterations and 30 populations, the NPV value has not changed. Hence, after a certain population number and iteration number, the NPV has not changed. Figure 12b depicts the NPV value with the awareness probability and iteration. Figure 12b shows that after increasing from 100 iterations and an awareness probability of 2, to 150 iterations and an awareness probability of 3, the NPV value has not changed. Figure 12c shows that iteration and population have a significant effect on the NPV value. However, flight length (fl) and awareness probability have less effect on the NPV value.

Case Study 2
In case study 2, the lower and upper bounds are used, which are listed in Table 10. Figure 13 shows the effect of various parameters. Figure 13a shows the NPV value of case study 1 along with the size of the population and the number of iterations. Figure 13a shows that after increasing from 30 iterations and 5 populations to 40 iterations and 9 populations, the NPV value has not changed. Hence, after a certain population number and iteration number, the NPV has not changed. Figure 13b shows the NPV value including the awareness probability and the number of iterations. Figure 13b shows that after increasing from 30 iterations and an awareness probability of 2, to 40 iterations and an awareness probability of 3, the NPV value has not changed. Figure 13c shows that iteration and population have a significant effect on the NPV value. However, the effects of awareness probability and flight length on the NPV value are small.  12. Sensitivity analysis on case study 1(a-c), (a) Population vs. iteration for case study 1, (b) Awareness probability vs. iteration for case study 1 and (c) Sobol sensitivity index for case study 1. In case study 2, the lower and upper bounds are used, which are listed in Table 10. Figure 13 shows the effect of various parameters. Figure 13a shows the NPV value of case study 1 along with the size of the population and the number of iterations. Figure 13a shows that after increasing from 30 iterations and 5 populations to 40 iterations and 9 populations, the NPV value has not changed. Hence, after a certain population number and iteration number, the NPV has not changed. Figure 13b shows the NPV value including the awareness probability and the number of iterations. Figure 13b shows that after increasing from 30 iterations and an awareness probability of 2, to 40 iterations and an awareness probability of 3, the NPV value has not changed. Figure 13c shows that iteration and population have a significant effect on the NPV value. 
However, the effects of awareness probability and flight length on the NPV value are small.

Advantage and Disadvantage
The no free lunch theorem (NFL) states that no single algorithm can be best for all problems [7]. In this problem, PSO, GA, CSA, and NCSA have several benefits and drawbacks. CSA failed to tackle this multimodal problem since it was unable to avoid premature convergence. Equation 1 is more susceptible to premature convergence as it lacks exploration capacity. However, NCSA has been able to tackle the problem with efficiency and effectiveness. In Table 11, the advantages and disadvantages of these algorithms are presented.

Discussion
Apart from individual advantages and disadvantage, the success of NCSA can be highlighted by the following points: • NCSA can perform better than PSO, GSA, and CSA to tackle highly nonlinear, multimodal optimization problems as NCSA can automatically subdivide its population into subgroups since the niching technique is implemented.

Advantage and Disadvantage
The no free lunch theorem (NFL) states that no single algorithm can be best for all problems [7]. In this problem, PSO, GA, CSA, and NCSA have several benefits and drawbacks. CSA failed to tackle this multimodal problem since it was unable to avoid premature convergence. Equation 1 is more susceptible to premature convergence as it lacks exploration capacity. However, NCSA has been able to tackle the problem with efficiency and effectiveness. In Table 11, the advantages and disadvantages of these algorithms are presented. Table 11. Advantages and disadvantages of discussed techniques.

Techniques
Advantages Disadvantages

NCSA
Higher effectiveness High exploration rate Superior net profit value.
A high standard deviation and low efficiency are observed.

PSO
Less parameter to tune. Simple structure and less dependent on initial points.
Trapped in local optima due to weak local search. A high standard deviation and low efficiency are observed. GSA Low standard deviation High exploitation provides less net profit value.

CSA
Less parameter to tune. Faster convergence. Easy Implementation.
Less effective in nonlinear optimization.
Trapped in local Optima.

Discussion
Apart from individual advantages and disadvantage, the success of NCSA can be highlighted by the following points: • NCSA can perform better than PSO, GSA, and CSA to tackle highly nonlinear, multimodal optimization problems as NCSA can automatically subdivide its population into subgroups since the niching technique is implemented.

•
To avoid premature convergence, such as those in PSO and GSA, the awareness probability parameter keeps NCSA switching between the equations based on the personal best information, or explicit global best.
This study has limitations, as it is primarily focused on developing optimization methods to tackle multimodal optimization. The optimization techniques can be compared with other metaheuristic algorithms in the future. In addition, history matching and uncertainty are not considered in this study for well placement optimization.

Limitations of the Study
This study only addressed well placement optimization. However, in the oil and gas industry, history matching and well control parameters are required to be optimized. Well controls are fixed in this study. However, optimized well control is necessary [87]. Furthermore, deciding the area of oil wells and operational settings (controls, for example, infusion/recuperation rates for heterogeneous supplies) presents extremely troublesome improvement issues and significantly affects underground energy recuperation execution and monetary worth. Well location optimization is an integer based problem. In addition, optimization of the well location is normally conducted first and well control settings are optimized as a fixed well location [88]. Contract theory can be used to jointly optimize the well placement optimization problem and well control optimization problem [49]. Again, a comprehensive sensitivity analysis is important. Besides, only two case studies are used. The space complexity is an important issue, which may require a different set of populations and iteration. However, in the well placement optimization problem, there is no study that has addressed this issue. In the future, a large search space should be considered to find an optimal solution. Sensitivity analysis shows that population and iteration mostly affects the performance. Since search space in the real world can be larger than in case studies, the number of iterations requires tuning. A convergence can be helpful to find the maximum number of iterations. If a sufficient number of iterations is not chosen, the convergence curve will only show an upward trend. The upward trends in the convergence curve show that the algorithm is still searching for global optima. However, if after a certain number iterations or fitness evaluations, the global optima have not changed, it can be assumed that the algorithm has converged and the convergence curve will provide a horizontal line.

Conclusions
In this study, the niching crow search algorithm is implemented for well placement optimization. The proposed technology is also applied to 23 classical benchmark functions. Experimental results suggested that the niching crow search algorithm can find a better solution than popular algorithms. The analysis indicates that the current approach can tackle the multimodal well placement optimization problem. Diversity analysis showed that a higher exploration could be helpful to tackle multimodal optimization problems. The crow search algorithm was unable to acquire good results for the well placement optimization problem. While tackling the multimodal optimization problem is the focus of this study, several statistical criteria are also considered to evaluate the stability and performance of the algorithms. The outcome of the study indicates that the proposed algorithm, in most cases, surpassed the main crow search algorithm. Future optimization studies should focus on combining the location of the well, well control parameters, and history matching. For better approaches, researchers have recently merged quantum computing with current metaheuristic optimization algorithms. Quantum-based optimization techniques were applied in several complex engineering applications using the quantum parallelism mechanism. Hence, to further develop the process, quantum behavior can be introduced in the future. Additionally, we suggest using the global search algorithm with a local search approach because it may have the advantage of solving the well placement optimization problem successfully. For example, the local search technique can be replaced by the firefly algorithm. Recently, lots of research works have used proxy models in recent years to replace actual reservoir simulators, and these models have been found to minimize runtime. The accuracy of this alternative model, therefore, depends on its range of sampling. 
Future research in this area may focus on improving the reliability of this technique.

Conflicts of Interest:
The authors declare no conflict of interest.