Genetic Algorithm Based on Natural Selection Theory for Optimization Problems

Abstract: The genetic algorithm (GA) is a metaheuristic based on the natural selection process and falls under the umbrella of evolutionary algorithms (EA). Genetic algorithms are typically used to generate high-quality solutions to search and optimization problems by relying on bio-inspired operators such as selection, crossover, and mutation. However, the GA still suffers from some downsides and needs to be improved so as to attain greater control of exploitation and exploration when creating a new population, and to address the randomness involved in initializing the solutions of the population. Furthermore, the mutation is imposed upon the new chromosomes, which can prevent the achievement of an optimal solution. Therefore, this study presents a new GA that is centered on natural selection theory and aims to improve the control of exploitation and exploration. The proposed algorithm is called the genetic algorithm based on natural selection theory (GABONST). Two assessments of the GABONST are carried out: (i) applying fifteen renowned benchmark test functions and comparing the results with those of the conventional GA, enhanced ameliorated teaching learning-based optimization (EATLBO), and the Bat and Bee algorithms; and (ii) applying the GABONST to language identification (LID) by integrating it with the extreme learning machine (ELM), a combination named GABONST-ELM. The ELM is considered one of the most useful learning models for carrying out classification and regression analysis. The results are generated on an LID dataset derived from eight separate languages. In terms of the statistical assessment, the GABONST algorithm is capable of producing good-quality solutions and has better control of exploitation and exploration than the conventional GA, EATLBO, Bat, and Bee algorithms.
Additionally, the obtained results indicate that GABONST-ELM has an effective LID performance, with accuracy reaching up to 99.38%.


Introduction
The past few decades have witnessed an increasing interest in using nature-inspired algorithms to solve numerous optimization problems, including timetabling problems [1][2][3][4]; data mining [5][6][7]; breast cancer diagnosis [8]; load balancing of tasks in cloud computing [9]; language identification [10,11]; and vehicle routing problems [12][13][14]. The observation of processes found in nature became the basis for nature-inspired algorithms, whose main objective is to seek the global optimal solutions of certain problems [15]. There are two common key factors in nature-inspired algorithms, namely diversification (exploration) and intensification (exploitation). Exploration entails the search for global optima via the random exploration of new solution spaces; meanwhile, exploitation entails the search for local optima in solution spaces that have been explored previously. Early work established the GA by demonstrating that a large number of problems can, in principle, be solved by utilizing GA methods. According to [31,32], the GA is a widely popular search and optimization method for resolving highly intricate problems, and its success has been proven in areas involving machine learning approaches. A complete description of the real-coded GA is provided in this section. Figure 1 provides the flowchart of the standard GA, whose procedure is described below [33]: Initial population. This entails the possible solution set P, i.e., a series of randomly generated real values, P = {p1, p2, . . ., ps}.
Evaluation (calculate the fitness value). The fitness function must be delineated in order to evaluate each chromosome in the population, characterized as fitness = g(P).
Selection. Following the fitness value calculation, the chromosomes are arranged by their fitness values. The selection of parents is then conducted, choosing two parents for the crossover and mutation.
Genetic operators. Once the selection process is complete, the parents' new chromosomes, or offspring (C1, C2), are created by utilizing the genetic operators. The new chromosomes (C1, C2) are then saved into children population C. This process involves the crossover and mutation operations [34]. The crossover operation is applied to exchange information between the two parents selected earlier; several crossover operators are available, such as single-point, two-point, k-point, and arithmetical crossover. In the mutation operation, the genes of the crossed offspring's chromosomes are changed; likewise, several methods are available for the mutation operator.
Upon completion of the selection, crossover and mutation operations, children population C is completely generated and will be transferred to the subsequent population (P). P is then utilized in the next iteration, whereby the whole process is run again. The iterations will stop if there is convergence of results, or if the number of iterations goes beyond the maximum threshold.
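The standard GA procedure above can be sketched as follows. The sphere test function, rank-based selection scheme, and all parameter values are illustrative assumptions for this sketch, not taken from this study (whose experiments were implemented in MATLAB):

```python
import random

def fitness(chrom):
    # Sphere function: minimum value 0 at the origin (illustrative objective).
    return sum(x * x for x in chrom)

def init_population(size, dim, lo, hi):
    # Initial population: randomly generated real-valued chromosomes.
    return [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(size)]

def select_parents(pop):
    # Rank-based selection: pick two parents from the better half.
    ranked = sorted(pop, key=fitness)
    return random.sample(ranked[: len(ranked) // 2], 2)

def crossover(p1, p2):
    # Single-point crossover.
    point = random.randrange(1, len(p1))
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(chrom, lo, hi, rate=0.1):
    # Uniform mutation: each gene is re-drawn with probability `rate`.
    return [random.uniform(lo, hi) if random.random() < rate else g
            for g in chrom]

def standard_ga(size=50, dim=5, lo=-5.0, hi=5.0, iters=100):
    pop = init_population(size, dim, lo, hi)
    best = min(pop, key=fitness)
    for _ in range(iters):
        children = []
        while len(children) < size:
            p1, p2 = select_parents(pop)
            c1, c2 = crossover(p1, p2)
            children += [mutate(c1, lo, hi), mutate(c2, lo, hi)]
        pop = children[:size]                  # children become population P
        best = min(pop + [best], key=fitness)  # track the best seen so far
    return best

random.seed(1)
best = standard_ga()
```

The stopping rule here is a fixed iteration budget; a convergence test on the best fitness, as described above, could be used instead.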
Recently, the GA has been broadly used in machine learning, adaptive control, combinatorial optimization, and signal processing. The GA has a good global search capability and is considered one of the essential technologies associated with modern intelligent computation [35]. Additionally, the GA has been implemented in many applications such as face recognition, where it has been applied to optimize the feature search process [36]. The GA has also been used for task scheduling, to solve the problem of task scheduling for phased array radar (PAR) [37], and in image encryption, for analyzing image encryption security [38]. In healthcare facilities, the GA has been used to formulate an efficient prediction model of stochastic deterioration that incorporates the latest observed condition into the forecasting process, overcoming the uncertainties and subjectivity of currently used methods [39]. Additionally, the GA has been applied in fuzzy logic in order to find fuzzy association rules [40]. The authors in [41] combined the GA with hesitant intuitionistic fuzzy sets to obtain the optimal solution for decision making. The work in [42] presents an efficient GA-based content distribution scheme to reduce the transmission delay and improve the throughput of the fog radio access network (F-RAN). The GA was used in [43] to enhance the performance of the ELM algorithm by selecting the optimal input weights, with an application in breast cancer detection. Moreover, many optimizations and improvements have been made to the GA. For example, [44][45][46] present hybrid GAs, [47,48] propose enhancements to the GA's operations (i.e., selection, mutation, and crossover), and the study in [11] separates the population pools (crossover and mutation pools).

Genetic Algorithm Based on Natural Selection Theory (GABONST)
The GABONST was created based on the concept of natural selection theory. Natural selection is a biological theory first proposed by Charles Darwin [49]. The theory entails the idea that genes adjust and survive throughout generations with the help of several factors. In other words, an organism with high ability is qualified to survive in the current environment and generates new organisms for the new generation, whilst an organism with low ability has two chances to survive in the current environment and avoid extinction: (1) the first chance is mating with a well-qualified organism (an organism with high ability), which may lead to generating new high-ability offspring for the new generation; and (2) the second chance is genetic mutation, which might make the organism stronger and able to survive in the current environment. If the organism obtained from one of the two chances does not satisfy the environment's requirements, it may become extinct over time. However, the impact is mutual: the environment affects the organisms and, at the same time, the organisms affect the environment, so over time both the organisms and the environment change [50]. Thus, applying the idea of natural selection theory to the GA promises to improve the exploration, exploitation, and solution diversity of the conventional GA by controlling the search space based on both the organisms and the environment.
This study simulates the idea of natural selection theory and integrates it into the genetic algorithm. The new proposed algorithm is named the genetic algorithm based on natural selection theory (GABONST). The procedure of the GABONST is presented in the following steps:

1. Beginning of the algorithm.
2. Set the population size n and the number of iterations NumIter.
3. Initial population. The initial population is a possible chromosome set S, a set of randomly generated real values, S = {s1, s2, . . ., sn}.
4. Calculate the fitness value of each chromosome in the population, g(S).
5. Calculate the mean of the fitness values using Equation (1).
In the GABONST, (1) the mean of the fitness values simulates the environment in the biological theory, and (2) each solution simulates an organism, with the fitness value of the solution (g(si)) simulating the ability of that organism to survive in that environment.

6. Compare the fitness value of each chromosome, g(si), with the mean:
a. If g(si) is less than or equal to the mean, implement the mutation operation on si and move the result to the next generation. This represents the right side of the GABONST flowchart (see Figure 2), which simulates the organisms (chromosomes) that are well qualified to survive in the current environment.
b. Otherwise, the chromosome si gets two chances to be improved. This represents the left side of the GABONST flowchart (see Figure 2), which simulates the idea of giving the unqualified organisms (chromosomes) two chances to adjust their genes and become qualified to survive in the current environment:
i. The first chance is mating with a well-qualified organism: the weak chromosome si is crossed with a well-qualified chromosome RS. If the new chromosome (si,new)C obtained by crossing si and RS qualifies to survive in the current environment, i.e., g((si,new)C) is less than or equal to the mean, then (si,new)C moves to the next generation; otherwise, go to the second chance, step (ii). The crossover operation is subject to the variable bounds: if a gene value exceeds the upper bound, it is set equal to the upper bound, and if it falls below the lower bound, it is set equal to the lower bound.
ii. The second chance is genetic mutation: the mutation operation is applied to the weak chromosome si. If the new chromosome (si,new)M obtained by mutating si qualifies to survive in the current environment, i.e., g((si,new)M) is less than or equal to the mean, then (si,new)M moves to the next generation. Otherwise, the organism (chromosome si) has missed both chances to qualify, so it dies (si is deleted) and a new one comes to life (a randomly generated chromosome is added to the next generation).
Figure 3 provides an example of the arithmetic crossover and uniform mutation operations applied in the GABONST. Following these steps, the new population S is obtained. S is then utilized in the ensuing iteration, whereby the whole process is run again. The iterations stop when the results converge or when the number of iterations exceeds the maximum threshold. The GABONST procedure is illustrated in Algorithm 1.
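The arithmetic crossover and uniform mutation operators used in the GABONST, including the bound handling described in step (i), can be sketched as follows; the mutation rate and parameter names are illustrative assumptions:

```python
import random

def clip(gene, lo, hi):
    # Genes that leave the feasible range are snapped back to the bound,
    # as described for the crossover operation above.
    return max(lo, min(hi, gene))

def arithmetic_crossover(weak, strong, lo, hi):
    # Child is a random convex combination of the two parents, per gene.
    a = random.random()
    return [clip(a * w + (1 - a) * s, lo, hi) for w, s in zip(weak, strong)]

def uniform_mutation(chrom, lo, hi, rate=0.2):
    # Each gene is replaced by a fresh uniform draw with probability `rate`.
    return [random.uniform(lo, hi) if random.random() < rate else g
            for g in chrom]
```

Because the convex combination of in-bound parents stays in bounds, the clipping only matters when a parent gene already sits outside the feasible range.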
Algorithm 1: The GABONST procedure.
1. Begin.
2. Set the population size n and the number of iterations NumIter.
3. Initial population: generate a chromosome set S of random real values, S = {s1, s2, . . ., sn}.
4. Evaluation: compute the fitness of each chromosome, fitness = g(S).
5. Compute the mean of the fitness values (Equation (1)).
6. For each chromosome si in S:
7.   If g(si) ≤ mean then
8.     Implement the mutation operation on si and move it to the next generation.
9.   Else
10.    Select a random chromosome RS from the top five chromosomes of the current population and implement the crossover operation on si and RS to generate (si,new)C; compute g((si,new)C).
11.    If g((si,new)C) ≤ mean then move (si,new)C to the next generation.
12.    Else implement the mutation operation on si to generate (si,new)M; compute g((si,new)M).
13.    If g((si,new)M) ≤ mean then move (si,new)M to the next generation.
14.    Else delete si and add a randomly generated chromosome to the next generation.
15. Repeat from step 4 until the results converge or NumIter is reached.
16. End.
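Although the paper's experiments were implemented in MATLAB, one GABONST generation as described above can be sketched in Python; the sphere test function and all parameter values here are illustrative assumptions:

```python
import random

random.seed(7)

def fitness(c):
    # Sphere test function (illustrative); the optimum is 0.
    return sum(x * x for x in c)

def mutate(c, lo, hi, rate=0.2):
    # Uniform mutation: each gene re-drawn with probability `rate`.
    return [random.uniform(lo, hi) if random.random() < rate else g for g in c]

def crossover(weak, strong):
    # Arithmetic crossover: a random convex combination of the parents.
    a = random.random()
    return [a * w + (1 - a) * s for w, s in zip(weak, strong)]

def gabonst_generation(pop, lo, hi):
    mean = sum(fitness(c) for c in pop) / len(pop)   # the "environment"
    top5 = sorted(pop, key=fitness)[:5]              # well-qualified pool
    nxt = []
    for c in pop:
        if fitness(c) <= mean:                       # qualified: mutate, survive
            nxt.append(mutate(c, lo, hi))
            continue
        child = crossover(c, random.choice(top5))    # chance 1: crossover with RS
        if fitness(child) <= mean:
            nxt.append(child)
            continue
        child = mutate(c, lo, hi)                    # chance 2: mutation
        if fitness(child) <= mean:
            nxt.append(child)
        else:                                        # extinct: random newcomer
            nxt.append([random.uniform(lo, hi) for _ in range(len(c))])
    return nxt

pop = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(50)]
for _ in range(100):
    pop = gabonst_generation(pop, -5, 5)
best = min(pop, key=fitness)
```

Note how the mean acts as the survival threshold: chromosomes at or below it pass directly (after mutation), while the rest must earn survival through the two chances.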

Experimental Test One
The measures for evaluating the GABONST are discussed in this section, which compares the GABONST with the EATLBO, conventional GA, Bat, and Bee algorithms in terms of certain standard mathematical functions associated with the optimization surface. These algorithms underwent fifteen experiments applying fifteen distinct objective functions, with 100 iterations and a population size of 50. Fifteen of the most common objective functions [51] were used to assess the performance of optimal solution value selection across all iterations of these algorithms. The optimal solution and dimension of each mathematical objective function (F1-F15) are presented in Table 1 below, and Figure 4 depicts their graphical representations. Table 1. Details of the utilized mathematical objective functions.

(Table 1 columns: Objective Function, Dim, Range, Optimal Solution.)

Table 2 presents the statistical results of the fifteen mathematical objective functions for the GABONST, GA, EATLBO, Bat, and Bee algorithms following 50 runs of the programme. Three of the most common statistical evaluation measures were used in this study: root mean square error (RMSE), mean, and standard deviation (STD) [21,52]. The 50 results of each objective function were used to calculate the RMSE, mean, and STD. In Table 2, the GABONST's RMSE and STD values are lower, which demonstrates the effectiveness of the GABONST in achieving the optimal solution. Meanwhile, the mean is close to the optimal solution (see Tables 1 and 2), which means that the GABONST generally attained an optimal solution on the fifteen mathematical objective functions throughout the 50 runs, indicating that the GABONST performed better than the EATLBO, GA, Bat, and Bee algorithms in terms of effectiveness and efficiency. In Table 2, the best results are shown in bold.
Based on the results in Table 2, the GABONST outperformed the conventional GA, EATLBO, Bat, and Bee algorithms on most of the test objective functions. The exceptions are F11, where both the GABONST and the EATLBO achieved the optimal solution, and F14, where the conventional GA was slightly better than the GABONST (see Tables 1 and 2). The GABONST is therefore concluded to have better performance than the conventional GA, EATLBO, Bat, and Bee algorithms. The GABONST is based on the idea of natural selection theory, which aims to enhance exploration and exploitation and to improve the diversity of the solutions. Comparatively, the experimental results on most of the objective functions clearly show that the GABONST has faster convergence (see Figure 5), a result of the good exploitation and exploration offered by the idea of natural selection theory. Figure 5 depicts the comparison results obtained on the fifteen objective functions, comparing the GABONST against the conventional GA, EATLBO, Bat, and Bee algorithms in a single run; it shows that the GABONST reaches the optimal solution faster and with fewer iterations. Thus, the GABONST will be integrated into the ELM instead of the EATLBO for the purpose of adjusting the input and hidden layer weights. The results shown in Figure 5a-o are based on the best solution in each iteration obtained from the GABONST, GA, EATLBO, Bee, and Bat algorithms using F1-F15 during a single run; the optimal solutions of F1-F15 are provided in Table 1. The results clearly show the superiority of the GABONST over the traditional GA, EATLBO, Bee, and Bat algorithms, with the GABONST reaching the optimal solutions faster and with fewer iterations.
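For reference, the three statistics can be computed from the 50 best-solution values of an objective function as follows; the sample values below are illustrative, not the paper's raw run data:

```python
import math

def run_statistics(values, optimum):
    # RMSE is measured against the known optimal value; STD against the mean.
    n = len(values)
    mean = sum(values) / n
    rmse = math.sqrt(sum((v - optimum) ** 2 for v in values) / n)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return rmse, mean, std

# Hypothetical best-solution values from five runs of a minimisation problem
# whose optimum is 0.0.
rmse, mean, std = run_statistics([0.01, 0.02, 0.0, 0.03, 0.01], optimum=0.0)
```

A low RMSE indicates closeness to the known optimum, while a low STD indicates run-to-run consistency, which is how Table 2 distinguishes the algorithms.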

Experimental Test Two
Additionally, this study aims to evaluate the impact of the proposed GABONST in an application. Thus, this section implements and evaluates the GABONST in spoken language identification (LID) by integrating the GABONST into the ELM. According to [53], the ELM is a single-hidden layer feedforward neural network (SLFN) whose hidden layer thresholds and input weights are generated randomly. Because its output weights are computed using the least squares method, the ELM exhibits speedy training and testing. However, achieving its training goal and the global minimum is not guaranteed by the randomly generated input weights and hidden layer thresholds, indicating that both are not the best parameters to use. Many studies have shown that optimizing the weights of an SLFN trained by the ELM is problematic, and several have attempted to carry out weight optimization using metaheuristic search methods [54][55][56][57]. One of these is the enhanced self-adjusting extreme learning machine (ESA-ELM) [29], which utilizes the teaching and learning phases under the framework of the enhanced ameliorated teaching learning-based optimization (EATLBO). However, the EATLBO optimization approach still suffers from several disadvantages, such as its selection criteria and its limited capability of generating good fresh solutions. This can result in incomplete optimization or a slow convergence rate, which cannot always assure reaching the optimum solution. Therefore, this study enhances the ELM algorithm by integrating the newly proposed GABONST into the ELM in place of the EATLBO and then applies it to spoken language identification (LID). Finally, this study aims to prove the capacity of the newly offered GABONST optimization algorithm to enhance the ELM's efficiency and effectiveness as a classifier model for LID.

Basic ELM
One study [53] proposed training the SLFN using the initial ELM algorithm. The ELM's main concepts comprise the random generation of the biases and the hidden layer weights. The output weights are calculated using the least squares solution delineated by the hidden layer outputs and the targets. Figure 6 details the main idea of the ELM structure, and the next subsection briefly explains the ELM. Table 3 shows the ELM's description along with its notations. Figure 6. Diagram of the ELM [58]. Table 3. Extreme learning machine's (ELM's) notation table [29].

(Table 3 columns: Notation, Implication.)

For N arbitrary distinct samples (x_j, t_j), j = 1, . . ., N, an SLFN with L hidden neurons and activation function g(x) can approximate the N samples without error; that is, there exist parameters β_i, w_i, and b_i such that [53]:

Σ_{i=1}^{L} β_i g(w_i · x_j + b_i) = t_j, j = 1, . . ., N,

which can be written compactly as:

Hβ = T (4)

where, based on [53], H is the hidden layer output matrix of the neural network (NN); the ith column of H represents the output of the ith hidden node with respect to the input nodes. If the activation function g is infinitely differentiable and the preferred number of hidden nodes satisfies L ≤ N, then Equation (4) becomes a linear system, and the output weights β can be systematically determined by the least squares solution:

β = H†T (5)

where H† is the Moore-Penrose generalized inverse of H. Hence, the output weights are obtained through a mathematical transformation, without the prolonged training phase in which network parameters are iteratively adjusted with appropriate learning parameters such as the learning rate and number of iterations. However, without an explicit approach for determining the input-hidden layer weights, the ELM is subject to local minima, i.e., no method can ensure the usability of the trained ELM in performing the classification. This weakness can be overcome by integrating the ELM with an optimization approach in which the optimal weights are identified, leading to the attainment of the ELM's best performance. The next subsection presents the genetic algorithm based on natural selection theory-extreme learning machine (GABONST-ELM), obtained by adopting the GABONST as an optimization approach in the ELM.
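The ELM training described above (random input weights and biases, hidden output matrix H, and output weights via the Moore-Penrose pseudoinverse) can be sketched in Python with NumPy; the data, dimensions, and tanh activation are illustrative assumptions, since the paper's experiments were implemented in MATLAB:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, L):
    # Random, fixed input weights and hidden biases (not trained).
    W = rng.normal(size=(X.shape[1], L))
    b = rng.normal(size=L)
    H = np.tanh(X @ W + b)            # hidden layer output matrix H
    beta = np.linalg.pinv(H) @ T      # Equation (5): beta = H† T
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy data: 100 samples, 4 features, binary target depending on feature 0.
X = rng.normal(size=(100, 4))
T = (X[:, :1] > 0).astype(float)
W, b, beta = elm_train(X, T, L=50)
pred = elm_predict(X, W, b, beta)
train_accuracy = float(np.mean((pred > 0.5) == (T > 0.5)))
```

The single least-squares solve replaces the iterative weight updates of backpropagation, which is the source of the ELM's training speed noted above.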

GABONST-ELM
The GABONST-ELM is based on the GABONST, which we described in Section 2.2. The GABONST-ELM uses the idea of the natural selection theory along with the GA, whereby the processes of selection, crossover, and mutation are used to adjust the input weight values and hidden node biases. Table 4 summarizes the ELM and GABONST parameter values used in the experiments of this study, along with the GABONST-ELM description. Random definitions of the input weight values and hidden node biases are carried out at the onset of the GABONST-ELM, and these are regarded as chromosomes.
The fitness value f(C) of each chromosome C is the root mean square error (RMSE) between the network outputs and the true values:

f(C) = √( (1/N) Σ_{j=1}^{N} ‖ Σ_{i=1}^{L} β_i g(w_i · x_j + b_i) − y_j ‖² )

where β is the output weight matrix, y_j is the true value of sample j, and N is the number of training samples. The procedure of the GABONST-ELM is explained in the following steps: Firstly, the target function fitness value is calculated for each chromosome C in the population. The fitness value f(C_i) of each C is calculated in order to evaluate C against the mean.

Secondly, the mean of the fitness values is calculated, Mean = (1/n) Σ_{i=1}^{n} f(C_i). The mean of the fitness values is calculated in order to simulate the environment in the biological theory.
Thirdly, each chromosome's fitness value is compared with the mean value. If the chromosome's fitness value is equal to or less than the mean, then the uniform mutation operation is implemented on that chromosome, and the result moves into the new generation; this simulates the well-qualified organisms' (chromosomes') survival in the current environment. If the chromosome's fitness value is greater than the mean, then that chromosome obtains two chances to be improved; this simulates the idea of giving the unqualified organisms (chromosomes) two chances to adjust their genes and become qualified to survive in the current environment: A. The arithmetical crossover operation is used to exchange information between that chromosome and a chromosome selected at random from the top five chromosomes of the current population. The new offspring is compared to the mean: if its fitness is equal to or less than the mean, the new offspring moves into the new generation; if it is greater than the mean, step B is implemented. B. The uniform mutation operation is applied to change the genes of that chromosome and generate a new chromosome, which is compared to the mean: if its fitness is equal to or less than the mean, it moves into the new generation; if it is greater than the mean, that chromosome is deleted and a randomly generated chromosome is added.
Upon the generation of the new population, the subsequent iteration resumes using this new population, and the whole procedure is reiterated; the iterative process stops when the number of iterations exceeds the maximum limit. The GABONST optimization results are then utilized as the input weights and hidden layer biases of the ELM, and the hidden layer output matrix H is computed using the activation function g(x). Additionally, the output weights β are calculated using Equation (5), whilst the resulting ELM prediction model is saved for testing. Figure 7 depicts the flowchart of the GABONST-ELM.
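One plausible way to encode a GABONST-ELM chromosome and evaluate its fitness is sketched below. The flat weights-plus-biases encoding, the tanh activation, the dimensions, and the training-RMSE fitness are assumptions made for illustration and may differ from the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def decode(chrom, n_features, L):
    # First n_features*L genes -> input weight matrix; last L genes -> biases.
    W = chrom[: n_features * L].reshape(n_features, L)
    b = chrom[n_features * L:]
    return W, b

def chromosome_fitness(chrom, X, T, L):
    W, b = decode(chrom, X.shape[1], L)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ T          # output weights via Equation (5)
    err = H @ beta - T
    return float(np.sqrt(np.mean(err ** 2)))   # training RMSE as fitness

# Toy data: 60 samples, 4 features, 10 hidden neurons.
n_features, L = 4, 10
X = rng.normal(size=(60, n_features))
T = rng.normal(size=(60, 1))
chrom = rng.uniform(-1, 1, size=n_features * L + L)
f = chromosome_fitness(chrom, X, T, L)
```

With this encoding, the GABONST's mutation and crossover operate directly on the gene vector, and each fitness evaluation performs one least-squares solve for β.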

LID Dataset
This study used the exact same dataset as the benchmark [29]. Eight spoken languages, namely (1) English, (2) Malay, (3) Arabic, (4) Urdu, (5) German, (6) Spanish, (7) French, and (8) Persian, were chosen and verified for the purpose of recognition. Audio files were recorded for each language from its respective country's broadcasting media channel, as Table 5 shows. Table 5. List of the media channels [29].

(Table 5 columns: No, Channel, Language.)

A total of 15 utterances were recorded for each language, with each utterance lasting 30 s. Training utilized about 67% of the total dataset, which is equal to 80 utterances, whilst testing utilized the remaining 33%, which is equal to 40 utterances [29]. The audio files were recorded from the channels listed above, with each subset of the dataset representing one language, in order to determine the algorithm's robustness.
The utterances were recorded as mp3 files with dual channels. In MATLAB, each file is loaded as an array of two nearly identical columns, of which only one was used. Each uttered term corresponds to one vector of data sampled from the audio file. Each utterance was 30 s long and needed to be sampled and quantized:

1. Sampling: the sampling rate is 44,100 Hz, so, based on the Nyquist frequency, the highest representable frequency is 22,050 Hz. A 30 s utterance therefore contains approximately 1,323,000 (44,100 × 30) samples.

2.
2. Quantization: real-valued samples are represented as integers in a 16-bit range (values from −32,768 to 32,767).

The utilized dataset can be depicted as follows:
a. Name and extension of the dataset: iVectors.mat;
b. Dimension of the dataset, as depicted in Table 6 (Table 6. Dataset dimension [29]);
c. Depiction of the classes, as shown in Table 7 (Table 7. Depiction of the class [29]; columns: No, Class Name, Utterance Number).
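The sampling and quantization figures quoted above can be checked with a few lines:

```python
# Quick check of the sampling and quantization arithmetic stated in the text.
sampling_rate = 44_100                 # Hz
duration_s = 30                        # seconds per utterance
nyquist_hz = sampling_rate // 2        # highest representable frequency
samples_per_utterance = sampling_rate * duration_s
q_min, q_max = -2 ** 15, 2 ** 15 - 1   # 16-bit signed integer range
```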

Evaluation of the Different Learning Model Parameters
This study used [59] as the basis for the evaluation, where numerous measures were applied. The authors of [59] addressed the classifier evaluation issue and offered effective measures to resolve it. Supervised machine learning offers several evaluation methods for assessing the performance of learning algorithms and classifiers. Hence, measures concerning classification quality were created in this study based on a confusion matrix that records recognized examples for each class according to their correction rate. The confusion matrix is one of the most common performance measurement techniques for machine learning classification: each row of the confusion matrix represents the instances in a predicted class, while each column represents the instances in an actual class [59].
The formulated datasets underwent several classification experiments entailing both the ESA-ELM benchmark [29] and the GABONST-ELM, with the number of hidden neurons varied from 650 to 900 in increments of 25 (following the benchmark scenario [29]). Consequently, there were a total of 11 experiments each for the ESA-ELM benchmark and the GABONST-ELM, with 100 iterations for each test.
The ESA-ELM (benchmark) and the GABONST-ELM were hence evaluated using several measures that are based on the ground truth, i.e., using the model to predict the outcome on the evaluation dataset or held-out data and comparing that prediction with the real outcome. The evaluation measures were also used in the comparison of the benchmark with the GABONST-ELM to determine the false positives, true positives, false negatives, true negatives, accuracy, recall, precision, G-mean, and F-measure. Equations (8)-(12) [29] present the evaluation measures used in this study.
G-Mean = √((tp/p) × (tn/n)) (12)

where p = tp + fn and n = tn + fp, with fp = false positive, tp = true positive, fn = false negative, and tn = true negative. The evaluation of both approaches, the ESA-ELM and the GABONST-ELM, was based on the same dataset and feature extraction approach as the benchmark [29]. The results of all the experiments carried out with the ESA-ELM and the GABONST-ELM are shown in the figures below. Across the 650-900 hidden neuron range, the GABONST-ELM displayed higher accuracy than the ESA-ELM benchmark, indicating that the performance of the GABONST-ELM in all the iterations is superior to that of the ESA-ELM benchmark. The comparison of results between both methods in terms of accuracy, precision, recall, F-measure, and G-mean is presented in Figures 8-12. The GABONST-ELM achieved its highest accuracy with 725 and 800-875 neurons, whilst the ESA-ELM achieved its highest accuracy with 875 neurons (see Figure 8). The GABONST-ELM achieved 99.38% accuracy, whilst the ESA-ELM achieved a slightly lower accuracy of 96.25%. The outcomes of the ESA-ELM for the other measures are precision 85.00%, recall 85.00%, G-mean 73.41%, and F-measure 85.00%; meanwhile, the GABONST-ELM recorded higher results for all the other measures, i.e., recall 97.50%, precision 97.50%, F-measure 97.50%, and G-mean 95.06%. Tables 9 and 10 present all the results of the evaluation measures for both the ESA-ELM and the GABONST-ELM. Moreover, other experiments were conducted utilizing the i-vector features and a neural network (NN) classifier; the "Adam" optimizer and the rectified linear unit (ReLU) activation function were used in the NN. The NN was implemented in LID based on the exact same benchmark dataset (see Section 3.2.3), with the number of hidden neurons varied in the range of 650-900 with an increment step of 25. Table 11 provides all the results of the NN during these experiments.
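The evaluation measures in Equations (8)-(12) can be computed from confusion-matrix counts as follows; the tp/fp/tn/fn values in the usage line are illustrative, not taken from the paper's tables:

```python
import math

def measures(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                   # tp / p, with p = tp + fn
    f_measure = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)              # tn / n, with n = tn + fp
    g_mean = math.sqrt(recall * specificity)  # Equation (12)
    return accuracy, precision, recall, f_measure, g_mean

# Hypothetical counts for one class of a multi-class LID evaluation.
acc, prec, rec, f1, g = measures(tp=39, fp=1, tn=279, fn=1)
```

In a multi-class setting such as LID, these counts are derived per class from the confusion matrix and then averaged.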
Additionally, several experiments were performed based on the benchmark dataset (see Section 3.2.3) for the basic ELM and the fast learning network (FLN), with the number of hidden neurons varied within the range of 650-900 in increments of 25. Tables 12 and 13 provide the experimental results of the basic ELM and the FLN. The highest performance of the basic ELM was achieved with 875 neurons, with an accuracy of 89.38%; the results of the other evaluation measures were 57.50%, 57.50%, 57.50%, and 40.53% for F-measure, precision, recall, and G-mean, respectively. The highest performance of the FLN was achieved with 725 neurons, with an accuracy of 92.50%; the results of the other evaluation measures were 70.00%, 70.00%, 70.00%, and 53.44% for F-measure, precision, recall, and G-mean, respectively. These findings confirm that generating suitable biases and weights for an ELM with a single hidden layer reduces classification errors: avoiding unsuitable biases and weights prevents the ELM from becoming stuck in local optima of the bias and weight space. Therefore, the performance of the GABONST-ELM is very impressive, with an accuracy of 99.38%.

Conclusions
In this study, we proposed the new GABONST, based on the existing genetic algorithm (GA), for optimization problems. The GABONST shares the concept of the conventional GA, which imitates the biological structure of the natural world based on Darwin's principles and comprises three operations, i.e., selection, crossover, and mutation. The GABONST enhances the conventional GA with the idea of natural selection theory. It is worth mentioning that all the experiments were implemented in the MATLAB programming language. Based on the algorithm's implementation and its results on fifteen different standard test objective functions, the algorithm has shown itself to be more effective than the conventional GA. This algorithm is primarily advantageous due to its focus on the better areas of the search space, which results from a good exploration-exploitation balance. The good exploration results from (i) giving the chromosomes that do not satisfy the mean two chances to be improved via the crossover and mutation operations, and (ii) deleting the chromosomes that have received the two chances and still do not satisfy the mean, and adding randomly generated chromosomes in their place. The good exploitation results from using the mean, which intensifies the search in the best region of the space. This advantage allows the algorithm to achieve better convergence. The GABONST is proven to have better performance than the conventional GA and the EATLBO based on the statistical analysis. Additionally, the GABONST-ELM outperformed the ESA-ELM in LID by adopting the GABONST into the ELM instead of the EATLBO. Following this study, the plan is to investigate new alternative selection criteria for choosing a chromosome for the crossover operation, instead of random selection, and to apply the algorithm to several possible applications.