A Novel Neural Network Training Algorithm for the Identification of Nonlinear Static Systems: Artificial Bee Colony Algorithm Based on Effective Scout Bee Stage

Abstract: In this study, a neural network-based approach is proposed for the identification of nonlinear static systems. A variant called ABCES (ABC Based on Effective Scout Bee Stage) is introduced for neural network training. Two important changes are made in ABCES. The first is an update of the "limit" control parameter. In the ABC algorithm, the "limit" value is fixed; in ABCES, it is adjusted adaptively according to the number of iterations. In this way, the efficiency of the scout bee stage is increased. Secondly, a new solution-generating mechanism is proposed for the scout bee stage. In the ABC algorithm, new solutions are created randomly; in the scout bee stage of ABCES, the aim is instead to improve on previous solutions. The performance of ABCES is analyzed on two different problem groups. First, its performance is evaluated on 13 numerical benchmark test problems, and the results are compared with ABC, GA, PSO and DE. Next, a neural network is trained by ABCES to identify nonlinear static systems, using six nonlinear static test problems. The performance of ABCES in neural network training is compared with ABC, PSO and HS. The results show that ABCES is generally effective in the identification of nonlinear static systems based on neural networks.


Introduction
Heuristic optimization algorithms are one of the most important topics in artificial intelligence. They are used to solve many real-world problems and provide many advantages; therefore, the number of heuristic optimization algorithms has increased recently. Heuristic optimization algorithms are divided into different classes according to their source of inspiration, such as swarm intelligence, bio-inspired, physics- and chemistry-based, and other algorithms [1]. The ABC algorithm is one of the most popular heuristic algorithms based on swarm intelligence and is used to solve many problems in different areas.
Another important usage area of the ABC algorithm is artificial neural network (ANN) training. An ANN produces output values from input values and can learn from samples. The learning process continues until a tolerance value is reached. The information obtained as a result of learning is stored in the weights. In this way, the network can use these weights to produce suitable results when faced with similar situations. This is a very important advantage of ANNs and one of the main reasons for choosing them within the scope of this study.
In our daily life, we encounter nonlinear systems in all areas. Some exhibit static behavior, others dynamic. This is why the identification of nonlinear systems is important. When the literature is examined, system identification studies have generally been carried out on nonlinear dynamic systems; in other words, nonlinear static systems have been largely ignored. Nonlinear dynamic systems are affected by previous time steps.

Some studies related to the ABC algorithm are presented in this section. Horng [7] suggested a maximum entropy thresholding (MET) approach based on the ABC algorithm for image segmentation. The study compared the obtained results with four different methods: PSO, hybrid cooperative-comprehensive learning-based PSO algorithm (HCOCLPSO), Fast Otsu's method and honey-bee mating optimization (HBMO). Karaboga [8] designed digital infinite impulse response (IIR) filters by using the ABC algorithm. Yeh and Hsieh [9] solved the reliability redundancy allocation problem by using a variant of the ABC algorithm, and the results were compared with different methods in the literature. Hemamalini and Simon [10] used the ABC algorithm for the economic load dispatch problem. Hong [11] proposed a model to predict electric load based on support vector regression (SVR) and the ABC algorithm. Şahin [12] used GA and ABC to maximize the thermal performance of a solar air collector. Zaman et al. [13] proposed a method based on the ABC algorithm for synthesizing antenna arrays. Deng [14] used the ABC algorithm to classify customers in a mobile e-commerce environment. Bulut and Tasgetiren [15] developed a variant of the ABC algorithm for the economic lot scheduling problem. There are many further studies related to the ABC algorithm [16][17][18].
Although the global convergence speed of the ABC algorithm is very good, different variants have been proposed to increase its local convergence speed. The main purpose is to improve the performance of the ABC algorithm. Bansal et al. [19] proposed an adaptive version of the ABC algorithm in which two important parameters, the step size and the "limit" control parameter, were adjusted adaptively according to current fitness values. Babaeizadeh and Ahmad [20] updated the employed, onlooker and scout bee phases of the ABC algorithm. Draa and Bouaziz [21] suggested a new ABC algorithm for image contrast enhancement. Karaboga and Gorkemli [22] proposed a variant known as qABC and modified the solution generation mechanism of the onlooker phase. Gao et al. [23] presented new solution generation mechanisms using more information about the population. Wang [24] made two important updates via a generalized opposition-based learning method and the local best solution in the ABC algorithm. Kıran and Fındık [25] added direction information for each dimension of each food source position to increase the convergence speed of the ABC algorithm. Liang and Lee [26] updated the ABC algorithm using different operations and strategies, such as elite, solution sharing, instant update, cooperative and population manager strategies. Karaboga and Kaya [4] used arithmetic crossover and an adaptive neighborhood radius to improve the performance of the ABC algorithm. Further variants of the ABC algorithm have also been proposed [27][28][29][30].

The Studies on ANN and Neuro-Fuzzy
Due to the advantages of ANNs, they have been used successfully in solving many real-world problems [31][32][33][34][35]. Capizzi et al. [36] proposed a neural network topology to model surface plasmon polariton propagation. Sciuto et al. [37] suggested an approach based on a spiking neural network for the anaerobic digestion process. Capizzi et al. [38] used a back-propagation neural network (BPNN) for automated oil spill detection by satellite remote sensing. An effective training algorithm should be used to achieve effective results with an ANN. Therefore, heuristic algorithms have recently been used extensively in ANN training. The ABC algorithm is one of the successful heuristic algorithms used in ANN training. Mohmad Hassim and Ghazali [39] suggested an approach for training a functional link neural network (FLNN) by using the ABC algorithm for time series prediction. They demonstrated that the proposed approach was better than the FLNN model based on BP. Zhang et al. [40] suggested a model based on a forward neural network for classifying MR brain images and adjusted the parameters of the network with the ABC algorithm. Ozkan et al. [41] used a neural network-based model and ABC for modeling daily reference evapotranspiration. Chen et al. [42] used an approach based on BPNN and the ABC algorithm for the prediction of water quality. Karaboga and Ozturk [6] applied the ABC algorithm to train FFNNs on pattern classification, using benchmark classification problems for performance analysis. The results obtained with the ABC algorithm were compared with some well-known algorithms, and it was reported that training an FFNN based on the ABC algorithm gave effective results on the related problems. Another usage area of the ABC algorithm is ANFIS training. Karaboga and Kaya [2,3] used the standard ABC algorithm for adaptive-network-based fuzzy inference system (ANFIS) training for nonlinear dynamic system identification.
In a different study, Karaboga and Kaya [4] proposed a new ANFIS training algorithm, called the adaptive and hybrid artificial bee colony (aABC) algorithm, to obtain more effective results in the identification of nonlinear dynamic systems. In a subsequent study, Karaboga and Kaya [5] trained ANFIS by using the aABC algorithm for nonlinear static system identification. The performance of the aABC algorithm was tested on 5 nonlinear static systems and compared with PSO, GA, HS and the ABC algorithm.

Standard ABC Algorithm
The ABC algorithm is one of the popular swarm-based heuristic optimization algorithms. It models the food searching behavior of honey bees [43]. It includes three different types of bees, named employed bees, onlooker bees and scout bees. There are some assumptions in the ABC algorithm; among them, half of the colony consists of employed bees and the other half of onlooker bees, so the number of employed bees is equal to the number of onlooker bees. The basic steps of the ABC algorithm are as follows. In the initial phase, the positions of the food sources are determined randomly by using (1).
Here, x_i denotes the ith solution, with i in the range [1, population size]. x_j^min is the lower bound of parameter j and x_j^max is the upper bound.
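The initialization step above can be sketched in code. The equation body of (1) is not reproduced in this text, so the sketch below follows the standard ABC initialization, x_ij = x_j^min + rand(0, 1) * (x_j^max − x_j^min), applied to every parameter of every food source:

```python
import numpy as np

def init_food_sources(pop_size, D, x_min, x_max, rng=None):
    """Random initialization of food sources, following the standard ABC Eq. (1):
    x_ij = x_min_j + rand(0,1) * (x_max_j - x_min_j)."""
    rng = np.random.default_rng() if rng is None else rng
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    # One uniform draw per parameter of each solution, scaled into [x_min, x_max]
    return x_min + rng.random((pop_size, D)) * (x_max - x_min)

# Example: 50 food sources of dimension 10 in the box [-5, 5]^10
population = init_food_sources(pop_size=50, D=10, x_min=[-5.0] * 10, x_max=[5.0] * 10)
```

Each row of `population` is one candidate solution; the population size and bounds here are illustrative.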
Every employed bee develops a new food source in the neighborhood of a solution position held in memory; (2) is used to create the new solution in this process. Here, k is a randomly chosen integer in the range [1, number of employed bees], with k ≠ i, and Θ_ij is a random number in the range [−1, 1].
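Since the body of (2) is not shown here, the following sketch uses the standard ABC employed-bee update, v_ij = x_ij + Θ_ij (x_ij − x_kj), which modifies a single randomly chosen dimension j of solution i using a randomly chosen neighbor k:

```python
import numpy as np

def employed_bee_candidate(X, i, rng=None):
    """Candidate solution for food source i, following the standard ABC Eq. (2):
    v_ij = x_ij + theta_ij * (x_ij - x_kj), with theta_ij uniform in [-1, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    n, D = X.shape
    j = rng.integers(D)          # single dimension to modify
    k = rng.integers(n)          # random neighbor, k != i
    while k == i:
        k = rng.integers(n)
    v = X[i].copy()
    v[j] = X[i, j] + rng.uniform(-1.0, 1.0) * (X[i, j] - X[k, j])
    return v

X = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
v = employed_bee_candidate(X, 0, rng=np.random.default_rng(1))
```

The candidate `v` differs from `X[0]` in at most one dimension, matching the one-dimension-at-a-time neighborhood search of the standard algorithm.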
If the nectar amount of the new source is higher than that of the previous one, the information belonging to the previous position is deleted from memory and the information belonging to the new food source is written to memory. Otherwise, the previous position is kept. After the search process is completed, the employed bees share the food source information with the onlooker bees. An onlooker bee evaluates the information of all employed bees and selects a source according to the probability value obtained from (3). As in the employed bee stage, a new solution is developed by modifying the current solution, and the nectar amount of the candidate solution is checked. If the nectar amount of the candidate solution is better, the information of the previous solution is deleted from memory.
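The probability in (3) is not written out in this text; the sketch below uses the standard ABC formulation, where a fitness fit_i = 1/(1 + f_i) (for f_i ≥ 0, or 1 + |f_i| otherwise) is derived from the objective value and p_i = fit_i / Σ fit:

```python
import numpy as np

def selection_probabilities(objective_values):
    """Onlooker-bee selection probabilities, following the standard ABC Eq. (3):
    fitness is 1/(1+f) for f >= 0 and 1+|f| otherwise; p_i = fit_i / sum(fit)."""
    f = np.asarray(objective_values, dtype=float)
    fit = np.where(f >= 0, 1.0 / (1.0 + f), 1.0 + np.abs(f))
    return fit / fit.sum()

# Lower objective value -> higher fitness -> higher selection probability
p = selection_probabilities([0.5, 2.0, 10.0])
```

An onlooker then samples a food source index according to `p` (e.g. via roulette-wheel selection), so better sources attract more onlooker bees.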
"Limit" is one of the important control parameters of ABC algorithm. If a position is not improved up to the limit value, it is assumed that this food source was abandoned. The abandoned food source is replaced by a new food source by scout bee.

Artificial Bee Colony Algorithm Based on Effective Scout Bee (ABCES)
One of the most important control parameters of the ABC algorithm is the "limit". The failure counter records the number of failures in improving a solution. When the failure counter reaches the "limit" value, a random solution is created in place of the previous solution. This discards the information accumulated in that solution and can prevent the creation of high-quality solutions. In the scout bee phase, instead of creating solutions randomly, the aim is to transform them into higher-quality individuals. In this study, two major changes are made to the structure of the standard ABC algorithm, with the main purpose of making the scout bee stage more effective. With these modifications, the convergence speed and solution quality of the algorithm are improved.
To make the scout bee stage more efficient, a strategy is proposed for determining the limit control parameter. In the standard ABC algorithm, the limit value is fixed throughout all iterations, which causes the scout bee stage to be entered less frequently to produce a new solution. To prevent this, the limit value is determined adaptively by using (4).
Here, the maximum value of the limit is 1 + D × FoodNumber, maxCycle is the maximum number of iterations, and iter is the current iteration number. The limit value is adjusted adaptively according to the number of iterations. Initially, (maxCycle − iter)/maxCycle takes its maximum value, so the limit also takes its greatest value; in effect, the maximum value of the limit is adjusted adaptively. The limit value changes at each iteration and is set according to the value of w, a random number in the range [0, 1]. It is ensured that the limit stays within the range [1, Upper_limit], where Upper_limit is found by using (5).
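Since Eqs. (4) and (5) are not reproduced verbatim in this text, the following is only a plausible reading of the description above, not the paper's exact formulas: the upper bound shrinks linearly with (maxCycle − iter)/maxCycle from 1 + D × FoodNumber, and the limit is drawn in [1, Upper_limit] via the random weight w:

```python
import numpy as np

def adaptive_limit(iter_, max_cycle, D, food_number, rng=None):
    """Sketch of the adaptive 'limit' of ABCES, under the assumptions stated in
    the lead-in (the paper's Eqs. (4)-(5) are not shown verbatim in the text)."""
    rng = np.random.default_rng() if rng is None else rng
    # Assumed form of Eq. (5): upper bound decays with the iteration count
    upper_limit = 1 + D * food_number * (max_cycle - iter_) / max_cycle
    w = rng.random()  # random weight in [0, 1], as described in the text
    # Assumed form of Eq. (4): limit clipped into [1, Upper_limit]
    return int(np.clip(round(w * upper_limit), 1, upper_limit))

limit_early = adaptive_limit(iter_=1, max_cycle=5000, D=10, food_number=25,
                             rng=np.random.default_rng(7))
```

Early in the run the limit can be large (patient abandonment), while late in the run it approaches 1, so the scout bee stage is entered far more often, exactly the effect the paper aims for.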
The scout bee stage becomes more effective with this change in the limit. In this case, an effective solution-generating mechanism is needed to obtain higher-quality solutions. The main purpose is to continue with a more effective solution than the previous one. Therefore, the solution-generating mechanism given in (6) is proposed.
Here, x_g is the global best solution, r1 is a random number in the range [0, 1], and r2 is determined by (7). γ is the arithmetic crossover rate and is chosen randomly. δ is the step size and is taken as 0.01. Arithmetic crossover is applied between the current solution and the global best solution; in other words, the quality of the current solution is improved by moving it toward the global best solution. The value of r2 is generated randomly depending on the number of iterations. Thus, the related solution approaches the global best solution at first, which increases the local convergence speed of the algorithm. Three different precautions are taken to reduce the risk of getting trapped in local minima. First, the arithmetic crossover rate is selected randomly. Secondly, when r1 ≥ r2, a new solution is produced in the neighborhood of the current solution via the step size (δ); in particular, at high iteration counts, new solutions are produced around the current solution rather than the global best solution, so high-quality solutions can be obtained even late in the search. The third is the possibility of updating the current and global best solutions in the employed and onlooker bee stages. Together, these three precautions also ensure that the new solutions differ from the global best solution.
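The mechanism described above can be sketched as follows. Because Eqs. (6) and (7) are not shown verbatim in this text, the exact forms of r2 and the crossover are assumptions labeled in the comments; only the branching logic (crossover toward x_g when r1 < r2, a δ-sized neighborhood step otherwise) is taken directly from the description:

```python
import numpy as np

def scout_bee_candidate(x_i, x_g, iter_, max_cycle, delta=0.01, rng=None):
    """Sketch of the ABCES scout-bee solution generator under the assumptions
    stated in the lead-in (Eqs. (6)-(7) are not reproduced in the text)."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random()
    # Assumed form of Eq. (7): r2 random but decaying with the iteration count,
    # so crossover toward the global best dominates early in the search
    r2 = rng.random() * (max_cycle - iter_) / max_cycle
    if r1 < r2:
        gamma = rng.random()  # arithmetic crossover rate, chosen randomly
        return gamma * x_i + (1.0 - gamma) * x_g
    # Otherwise: perturb around the current solution via the step size delta
    return x_i + delta * rng.uniform(-1.0, 1.0, size=x_i.shape)

x_new = scout_bee_candidate(np.array([0.5, 0.5]), np.array([0.0, 0.0]),
                            iter_=100, max_cycle=5000)
```

Early iterations thus pull abandoned solutions toward the global best, while late iterations keep refining locally, which matches the convergence behavior claimed for ABCES.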
In summary, the limit value is adjusted adaptively, and the effectiveness of the scout bee stage is increased with the new limit calculation method. A new solution-generating mechanism is proposed for the scout bee stage. In this way, the local convergence speed of the algorithm is increased and better-quality solutions are obtained.

Training Feed Forward Artificial Neural Networks
Artificial neural networks (ANNs) are one of the artificial intelligence techniques. ANNs consist of interconnected artificial neurons. Figure 1 shows the general structure of an artificial neuron, which consists of inputs, weights, a bias value, an activation function and a transfer function. In this way, an output is obtained from the inputs of the neuron. The output of a neuron is calculated using (8), where x is the input vector, w are the weight values corresponding to the inputs, b is the bias value, f is the activation function and y is the output of the artificial neuron.
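The neuron output of (8) follows the usual form y = f(Σ_i w_i x_i + b); a minimal sketch, using the sigmoid activation that the paper later applies in the hidden and output layers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, w, b, f=sigmoid):
    """Output of a single artificial neuron, Eq. (8): y = f(sum_i w_i * x_i + b)."""
    return f(np.dot(w, x) + b)

# Example neuron with two inputs; weights and bias are illustrative values
y = neuron_output(x=np.array([0.2, 0.4]), w=np.array([0.5, -0.3]), b=0.1)
```

In the training setting of this paper, the weights `w` and bias `b` of every neuron are exactly the parameters that the ABCES algorithm optimizes.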
An FFNN consists of 3 layers: input, hidden and output. In an FFNN, the calculations specified in (8) are performed in each neuron; in this way, each neuron affects the neurons in the next layer, and there is no interaction within the same layer. An FFNN produces an output corresponding to the input values, which is only possible by creating a model for the related problem. For this, the network needs to be trained. Training the network is the process of determining the weights and bias values, and training algorithms are used for this. One of the learning methods is learning with samples, which requires a training dataset. The training dataset reflects the characteristics of the problem, and the network learns from it during the training process. The learning level of the network is related to the error value, which reflects the relationship between the real output and the predicted output. A low error value is very important for a successful training process, and for a low error value, an effective training algorithm is required.

Solution of Global Optimization Problems
In the applications, 13 numerical test problems are used to analyze the performance of the ABCES algorithm. The related problems are given in Table 1. For the ABC and ABCES algorithms, the population size is taken as 50. The results are obtained for different values of D ∈ {50, 100, 150, 1000}. The number of evaluations is set to 100,000, 500,000 and 1,000,000. Each application is run 30 times, and each initial population is determined randomly.

The results found with the ABC and ABCES algorithms in 100,000 evaluations are given in Table 2. 13 test functions are used, and the results are obtained for D ∈ {50, 100, 150} for each function. The mean objective function and standard deviation values are given. When D is 50, 100 and 150, the ABCES algorithm obtains better results than the ABC algorithm on SumSquares, Levy, Sphere, Rosenbrock, The Sum of Different Powers, Zakharov, Ackley, Step, Griewank, Rotated Hyper-Ellipsoid, Dixon-Price and Perm. The ABC algorithm is only successful on Rastrigin. Apart from the average objective function values, the ABCES algorithm is also more successful in the standard deviation values, which shows that the results obtained with ABCES in 100,000 evaluations are more robust. The Wilcoxon signed rank test is used to determine the significance of the results, and it is given in Table 3. The evaluation is made at the p = 0.05 level. 13 test functions are evaluated in 3 different dimensions (D = 50, 100, 150); that is, the significance of 39 results is examined. A significant difference is found in favor of the ABCES algorithm in 34 of these, indicating that the ABCES algorithm is better for 34 objective function values. There is no significant difference in 4 results, and in only one result is there a significant difference indicating that the ABC algorithm is better. When the results obtained in 100,000 evaluations are examined, it is seen that fast convergence continues in the problems; therefore, it is determined that better results can be achieved with more iterations. The results found in 500,000 evaluations are thus given in Table 4. In 500,000 evaluations, the quality of the solutions improves considerably compared with 100,000 evaluations. Table 4 compares the ABC and ABCES algorithms. The ABC algorithm is only better on the Levy, Step and Dixon-Price functions; on the other problems, the ABCES algorithm is more successful.
Although ABCES has better results on the Levy and Step functions in 100,000 evaluations, the situation changes in favor of the ABC algorithm in 500,000 evaluations. On the Rastrigin function, the ABC algorithm is more successful in 100,000 evaluations, while the ABCES algorithm is better in 500,000 evaluations. The Wilcoxon signed rank test is performed between ABC and ABCES to determine the significance of the results obtained in 500,000 evaluations, and it is given in Table 5. The analyses are performed at the p = 0.05 level. The significance of 39 objective function values is examined. In 25 of them, a significant difference is obtained in favor of the ABCES algorithm, showing that the ABCES algorithm is more successful than the ABC algorithm on these functions. The ABC algorithm is only better in 8 of them; these results belong to the Levy, Step and Dixon-Price functions, on which the ABC algorithm is effective. In the remaining 6 results, no significant difference is found between ABC and ABCES. Although ABCES is notably better on the Rosenbrock function, the difference is not significant. Also, as in 100,000 evaluations, the best standard deviation values in 500,000 evaluations are generally obtained with the ABCES algorithm. The success of optimization algorithms on high-dimensional problems is very important. Therefore, the results obtained with the ABC and ABCES algorithms for D = 1000 on the SumSquares, Levy, Sphere, Rosenbrock, The Sum of Different Powers, Zakharov, Ackley, Step, Rastrigin, Griewank, Rotated Hyper-Ellipsoid and Dixon-Price functions are given in Table 6. The ABC algorithm is only better on the Rastrigin function; ABCES is more successful on all other problems. In particular, on the Rotated Hyper-Ellipsoid function, the objective function value obtained with the ABC algorithm is 6.40 × 10^8 and no effective solution is found, whereas a value of 6.25 × 10^2 is achieved with ABCES.
In addition, while the success rate of the ABC algorithm on the SumSquares, Sphere and The Sum of Different Powers functions is low, more effective results are obtained with the ABCES algorithm. The Wilcoxon signed rank test is used to determine whether the results are significant, and it is given in Table 7. The analyses are performed at the p = 0.05 level. The significance status of 12 functions is examined. In 8 of them, a significant difference is found in favor of ABCES. In only one function is a significant difference obtained in favor of the ABC algorithm, and no significant difference is found for the other functions. In addition, in all functions the best standard deviation values are achieved with ABCES. When the results given in Tables 6 and 7 are evaluated, they show that the ABCES algorithm is better than the ABC algorithm on high-dimensional problems. ABCES is also compared in Table 8 with GA, PSO, DE and ABC by using the results reported in [44]. The results are given for a population/colony size of 50 and 500,000 evaluations. In addition, values below 10^−12 in [44] are assumed to be 0 (zero); for a fair comparison, values below 10^−12 are also accepted as 0 (zero) for the ABCES algorithm. When the related table is analyzed, 0 (zero) is obtained with the PSO, DE, ABC and ABCES algorithms on the SumSquares, Sphere and Step functions. All algorithms other than GA and ABC reach the 0 (zero) value on the Zakharov function. Also, the ABC and ABCES algorithms find the 0 (zero) value on the Ackley function. The best results for the Rastrigin, Griewank and Dixon-Price functions are achieved with the ABC and ABCES algorithms, and the best results for Rosenbrock and Perm are obtained with the ABCES algorithm. The results given in Table 8 show that the ABCES algorithm is generally more successful than the GA, PSO, DE and ABC algorithms.

Training Neural Networks with ABCES Algorithm for the Identification of Nonlinear Static Systems
In this section, the performance of the ABCES algorithm is assessed on neural network training for the identification of nonlinear static systems. In the applications, the 6 nonlinear static systems (S1, S2, S3, S4, S5, S6) given in Table 9 are used. S1 has one input, S2 and S3 have two inputs, S4 and S5 have three inputs, and S6 has four inputs. Datasets are created using the equations given there. For S1, S2 and S3, the output value y is obtained by using input value(s) in the range [0, 1], and the dataset contains 100 data points for each of these first 3 systems; 80% of the dataset is used for the training process and the rest for testing. The input values are in the range [1, 6] for S4. A dataset consisting of 216 data points is created using 6 values for each input; 173 data points belong to the training process and the rest are chosen for testing. A dataset with 125 data points is created using the related equation for S5, with input values in the range [0, 1]. For S6, input values are in the range [−0.25, 0.25] and a dataset consisting of 125 data points is created. For S5 and S6, 100 data points are used for the training process and the rest are chosen for testing. According to the dataset index value i, the operation mod(i, 5) = k is applied in all systems: if k = 0, the data point is chosen for testing; otherwise, it is included in the training dataset. There are two reasons for applying the mod operation with the value 5. The first is to assign 80% of the dataset to the training process, with the rest forming the test dataset. The second is to ensure that the training dataset covers the whole dataset, so that a more effective training process is realized and, at the same time, the test dataset reflects the whole system. A feed forward neural network (FFNN) is used in this study, with the sigmoid function used for the neurons in the hidden layer and the output layer. Three different network structures are used for each system.
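The mod-based train/test split described above is small enough to state directly in code; with a 1-based dataset index, every fifth sample goes to the test set:

```python
def split_by_mod(n_samples, period=5):
    """Train/test split described in the text: for 1-based dataset index i,
    samples with mod(i, 5) == 0 go to the test set and the rest to training,
    giving an (approximately) 80/20 split spread over the whole dataset."""
    train_idx, test_idx = [], []
    for i in range(1, n_samples + 1):
        (test_idx if i % period == 0 else train_idx).append(i)
    return train_idx, test_idx

train_100, test_100 = split_by_mod(100)   # S1-S3: 80 training / 20 test
train_216, test_216 = split_by_mod(216)   # S4:    173 training / 43 test
train_125, test_125 = split_by_mod(125)   # S5-S6: 100 training / 25 test
```

Note that this reproduces the counts reported for all six systems (80/20, 173/43 and 100/25), and, unlike a random split, it interleaves test points uniformly across the dataset.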
4, 8 and 12 neurons are used in the hidden layer. FFNN training is realized via the ABCES algorithm. The flow chart of FFNN training based on the ABCES algorithm for the identification of nonlinear static systems is presented in Figure 2. Before training, the input and output pairs of the nonlinear static system are normalized to the range [0, 1]. For the ABCES algorithm, the population size and maximum number of iterations are taken as 20 and 5000, respectively. The number of training and test data points used for each system is given in Table 9. The MSE (mean squared error), calculated as in (9), is used as the error value for the training and testing processes; here, n is the number of samples, y_i is the real output and ȳ_i is the predicted output. Each application is run 30 times for statistical analysis, and the mean error value (mean) and standard deviation (std) are obtained.
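The error measure of (9) is the standard mean squared error, MSE = (1/n) Σ_i (y_i − ȳ_i)²; a minimal sketch:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error, Eq. (9): MSE = (1/n) * sum_i (y_i - yhat_i)^2."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# Example with illustrative (normalized) real and predicted outputs
err = mse([0.2, 0.4, 0.6], [0.25, 0.35, 0.65])
```

In the training loop, this MSE over the training set serves as the objective function that ABCES minimizes over the network's weights and biases.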
The results obtained with the ABCES algorithm are presented in Table 10. Increasing the number of neurons in the hidden layer increases the solution quality for S1: the best mean error values for training and testing are achieved with the 1-12-1 network structure. The number of neurons affects the mean training and test error values differently for S2: although the best mean training error value is found with 2-12-1, the best mean test error value is obtained with 2-8-1. A low number of neurons is more effective for S3, where the best mean error values for both training and testing are achieved with 2-4-1. Close performance is observed for the 3-8-1 and 3-12-1 network structures on S4. Similarly, the best mean training error values for S5 are found with 3-8-1 and 3-12-1, but the best mean test error value is obtained with 3-4-1. All the best results for S6 are obtained with 4-12-1. When all systems are evaluated in general, four basic observations can be made. First, the network structure affects performance: increasing or decreasing the number of neurons behaves differently depending on the system. Second, there is a difference between training and test errors, which can be explained by the selection of the training and test datasets. Third, generally low standard deviation values are obtained, which shows the stability of the solutions. Finally, the low error values indicate that the ABCES algorithm is successful. In Figure 3, the outputs found with the ABCES algorithm are compared with the real outputs. Effective output graphs are obtained with the ABCES algorithm for all systems, indicating that the nonlinear static systems are identified with high accuracy.
ABCES is compared with the PSO, HS and ABC algorithms to better evaluate its performance. The results are presented in Table 11. For S1, the best mean training and test error values are found with the ABCES algorithm, and the ABC algorithm is the most effective after ABCES; the same is true for S2. The best mean training error value for S3 is found with ABCES, followed by PSO. Although the best mean test error value is also obtained with ABCES, the worst results are found with HS. For S4, it is clear that ABCES is effective. For S5, the best mean training error value is found with ABCES, while the best mean test error value is obtained with PSO. The best results for S6 are clearly found with ABCES. When the results are evaluated in general, the ABCES algorithm is more successful in neural network training than the others; after ABCES, the performances are ranked as ABC, PSO and HS, respectively. The convergence behavior of the algorithms is also examined: the convergence graphs of PSO, HS, ABC and ABCES on all systems are compared in Figure 4. The convergence of the ABCES algorithm is more effective on all systems, and these graphs show that the ABCES algorithm has a better convergence speed than the other algorithms. After the ABCES algorithm, the best convergence is achieved with the ABC algorithm, except on S3 and S5, where PSO converges more effectively than the ABC algorithm.

Discussion
The ABCES algorithm generates new solutions in the scout bee stage by using information from previous solutions instead of random solutions. The "limit" value is not fixed but is determined adaptively according to the number of iterations. How these changes affect the performance of the ABCES algorithm is examined on two different problem groups: global optimization problems and FFNN training for the identification of nonlinear static systems.
The proposed new solution generation mechanism for the scout bee stage has been effective in solving global optimization problems. Many applications are realized with different numbers of evaluations and different problem dimensions. In these results, the ABCES algorithm is generally more effective than the ABC algorithm; especially on high-dimensional problems, the performance of the algorithm is significantly improved. The clear performance difference between the standard ABC algorithm and the ABCES algorithm shows the effect of the scout bee stage and the "limit" control parameter. At the same time, the ABCES algorithm is generally more successful than heuristics such as GA, PSO and DE, an indication that the ABCES algorithm can compete with different heuristic algorithms. The ABCES algorithm also finds low standard deviation values in parallel with the low error values, which shows that the results are robust.
The identification of nonlinear static systems is a difficult problem due to the system behavior. The effect of the changes in both the scout bee stage and the "limit" control parameter is analyzed on 6 nonlinear static systems. Generally, as the number of neurons in the hidden layer increases, more effective results are obtained. This shows that the problem is difficult and reveals the need for more weight values to explain the relationship. With the ABCES algorithm, a performance increase of 50% and above is achieved on all systems compared with the ABC algorithm. The changes in the scout bee stage have increased the convergence speed of the ABCES algorithm. ANN training aims to find the output closest to the real output, and the analyses show that the ABCES algorithm is an effective training algorithm in this regard. It is compared with heuristics such as PSO, HS and ABC to better understand its success, and the results show that the ABCES algorithm is successful in FFNN training.
The changes realized in the scout bee stage and the limit control parameter with the ABCES algorithm positively affect the results. Different solution-generating mechanisms for the scout bee stage can be integrated to further improve the performance of the ABCES algorithm. At the same time, different approaches can be put forward to determine the "limit" control parameter adaptively.

Conclusions
This paper proposes a neural network-based approach for the identification of nonlinear static systems. A new training algorithm called ABCES (ABC Based on Effective Scout Bee Stage) is introduced to achieve effective results in modeling with artificial neural networks. The standard ABC algorithm basically consists of three stages: employed bee, onlooker bee and scout bee. The employed and onlooker bee stages are more efficient than the scout bee stage; when the scout bee stage is reached, it means that a better new solution could not be developed. In this case, a random solution is created in the scout bee stage of the standard ABC algorithm, which amounts to a failure to use the information already obtained. Preventing this yields a more effective algorithm, and for this purpose the ABCES algorithm is proposed to create a more effective scout bee stage. In this algorithm, two important changes are made with respect to the standard ABC algorithm. First, the "limit" control parameter is made adaptive according to the number of iterations. Secondly, a new solution generation mechanism, which adjusts the new position according to the global best solution in the scout bee stage, is proposed. With these changes, an effective ABCES algorithm is created. The performance of the ABCES algorithm is evaluated on two different problem groups. First, applications are realized on 13 numerical optimization test problems, and ABCES is compared with the GA, PSO, DE and ABC algorithms. The Wilcoxon signed rank test is applied to determine the significance of the results. The results show that the ABCES algorithm is generally more successful than the other algorithms in solving numerical optimization problems.
Secondly, an FFNN is trained by using the ABCES algorithm for the identification of nonlinear static systems. Six nonlinear static systems are used in the applications, and the effect of different network structures on performance is examined. The performance of the ABCES algorithm is compared with the PSO, HS and ABC algorithms in terms of solution quality and convergence speed. The results show that the ABCES algorithm is generally more successful than the other algorithms in the identification of nonlinear static systems based on neural networks.
In this study, the ABCES algorithm is used for the first time and is evaluated on global optimization problems and FFNN training. In future studies, the performance of the ABCES algorithm can be examined on different types of problems. As a continuation of this study, FFNN training can be performed with the ABCES algorithm to identify nonlinear dynamic systems. Additionally, neuro-fuzzy models can be trained with the ABCES algorithm to identify nonlinear dynamic and static systems, and its performance on neuro-fuzzy training can be evaluated. Apart from system identification, ANN and neuro-fuzzy training with ABCES can be carried out for the solution of real-world problems.