## 1. Introduction

The radial basis function neural network (RBFNN) has been widely used in many fields due to its simple network structure, fast learning speed, and good approximation capability [1,2]. RBFNN is a feed-forward neural network first utilized by Moody and Darken [3], who confirmed that RBFNN learns faster than the multilayer perceptron neural network (MLP). Moreover, RBFNN is simpler than the MLP network, which may contain more than three layers; thus, training an RBFNN is generally faster than training an MLP [4,5]. The difficulty of applying the traditional RBFNN lies in training the network, which involves selecting the proper input variables, choosing the number of hidden neurons, and estimating the parameters (centers and widths) of the RBFNN [4]. The majority of traditional RBFNN approaches focus only on determining the parameters of the RBFNN while leaving the input variables and the number of hidden neurons fixed; a trial-and-error method is then adopted to choose the number of hidden neurons [6]. In this work, 2-satisfiability (2SAT) logic helps find the input data used to estimate the parameters of the hidden layer, and the logical rule approximates the number of hidden neurons in RBFNN. In RBFNN training, optimal results can be guaranteed by using the training algorithm to find the RBFNN output weights while solving the linear output of RBFNN-2SAT in a less time-consuming manner [2]. In this study, we used different algorithms to train RBFNN-2SATRA.

RBFNN is one of the most popular feed-forward neural networks and contains only three layers (input, hidden, and output). The values of the neurons move from the input layer, through the hidden layer, to the output layer. The three-layer structure aims to minimize classification and forecasting errors in RBFNN [1]. The proper operation of RBFNN relies primarily on an adequate choice of the parameters of its basis functions. The simplest approach to training an RBFNN assumes fixed radial basis functions defining the activation of the hidden units. Recently, researchers have begun to train RBFNN using different methods and algorithms. Yu et al. proposed a new method for training RBFNN called the error correction algorithm; however, the algorithm focused only on the hidden neuron parameters, and the output weight was not considered in the feature extraction process [7]. Dash et al. used differential evolution (DE) to optimize RBFNN by adaptively controlling the hidden parameters in the hidden layers [8]. This method, nevertheless, lacks interpretability in the hidden layer of RBFNN [8]. Yang and Ma [9] applied the Sparse Neural Network (SNN) algorithm to optimize the number of hidden neurons. The core mechanism of SNN is to reduce errors through a trial-and-error tactic so as to identify the number of hidden neurons from the crowd of neurons. The limitation of SNN is its high computation time when searching for the best number of hidden neurons. Inspired by various works [10,11,12], 2-satisfiability (2SAT) logic has been used with RBFNN to identify relevant parameters such as the centers. Moreover, 2SAT was chosen because it complies with the RBFNN representation and structure. Several studies have used logic programming as a symbolic rule in RBFNN. The first effort to implement logic programming in RBFNN was made in [13], where a new model was proposed by embedding higher-order logic programming into RBFNN as a single network. The quest for the optimal model was continued by Hamadneh et al. [14], who embedded HornSAT logic programming in RBFNN to improve its performance. The results showed that HornSAT logic is able to improve the performance of RBFNN. Unfortunately, the proposed RBFNN does not integrate HornSAT to process real datasets: the final classification of the dataset via RBFNN only capitalizes on the standard classification of RBFNN, and no attempt was made to extract knowledge from the dataset via a logical rule.

Logic mining was formally introduced by Sathasivam and Abdullah [15]. In that paper, the proposed logic mining managed to extract the HornSAT logical rule from a student dataset. One limitation of the proposed logic mining is its limited ability to generalize the induced logical rule that represents the dataset. The development of logic mining was continued by the work of Kho et al. [16], who proposed a 2-satisfiability based reverse analysis method (2SATRA). In 2SATRA, 2SAT represents explicit information of the datasets in terms of trend. The proposed 2SATRA utilizes systematic 2SAT in the Hopfield Neural Network (HNN) by extracting the optimal logical rule in electronic games. The application of 2SATRA was reported in several domains, such as palm oil price extraction [17], Amazon human resources [18], medical datasets [19], and social media analysis [20]. The proposed 2SATRA managed to achieve acceptable accuracy and to generalize the behavior of the dataset. It is worth mentioning that the induced logical rule always converges to the global minimum energy. Although Hamadneh [21] formally introduced logic programming in RBFNN, there has been no attempt to extract the 2SAT logical rule by using RBFNN. Hence, by implementing metaheuristics such as AIS, the proposed RBFNN is able to reduce the training error that would otherwise lead to a suboptimal induced logic.

Another main ingredient in integrating 2SAT into the reverse analysis method in RBFNN is the training scheme, which exerts a considerable effect on RBFNN performance. In this regard, a wide range of global optimization techniques has been applied to train RBFNN because of their global search ability. Metaheuristic algorithms are global optimization techniques popularly used to seek near-optimal solutions for RBFNN [13,22]. Numerous recently developed, nature-inspired optimization algorithms include artificial immune systems (AIS) [1], artificial bee colony (ABC) [23], particle swarm optimization (PSO) [24], differential evolution (DE) [25], and the genetic algorithm (GA) [26]. Some of these algorithms have verified their suitability for numerous engineering optimization problems [27]. In most cases, each algorithm seeks a solution within a specific solution space by moving towards the best solution in every iteration.

The theoretical basis for the genetic algorithm (GA) was developed in 1973 by Holland [26]. Goldberg and Holland [28] were the first researchers to implement GA in a problem that involved controlling gas pipeline transmission. Other attempts were made by Hamadneh et al. [21], who utilized GA for training the hybrid RBFNN model with higher-order SAT logic by managing a full-training paradigm. In another study, a genetic algorithm and multiple linear regression approaches were compared to predict temporal scour depth near a circular pier in non-cohesive sediment; the results showed that prediction using GA is more accurate than multiple linear regression [29]. In a recent publication, GA was integrated with RBFNN to develop a reliability analysis method [30]: GA was used to optimize RBFNN to solve the constrained optimization problem, and the results confirmed the robustness, accuracy, and efficiency of GA in RBFNN.

Storn and Price were the first researchers to introduce the DE algorithm as a means for solving numerous global optimization problems. DE is a flexible and powerful evolutionary algorithm with the advantages of fast convergence, few parameters, and great simplicity [25]. DE has been merged into numerous neural networks, such as feed-forward neural networks [31] and the Hopfield neural network [32]. Other scholars utilized the DE algorithm for training a Wavelet Neural Network for bankruptcy prediction in banks [33]; the results showed that DE with the Wavelet Neural Network outperformed the alternatives in terms of sensitivity and accuracy. Recently, Tao et al. [34] developed a prediction model by integrating the DE algorithm and RBFNN for the coking energy consumption process. The results showed that DE improved RBFNN in terms of stability and accuracy.

PSO was suggested by Kennedy and Eberhart in 1995 as one of the evolutionary algorithms [24]. PSO is inspired by nature and mimics the flocking behavior of migrating birds [35]. In another work, Qasem and Shamsuddin [36] proposed PSO to enhance RBFNN training by optimizing the parameters of the hidden layer and the output layer. Another study by Alexandridis et al. [37] utilized the PSO algorithm to optimize the structure of RBFNN. Their model proved competent in solving function approximation and classification problems by enhancing generalization ability and accuracy.

ABC is inspired by the collective behavior of bees gathering honey in an optimal pattern [23]. ABC was proposed to acquire a computational advantage in optimizing both global and local search ability [23]. Accordingly, ABC has been utilized to train numerous networks, such as RBFNN [12], the Hermite Neural Network [38], and the Hopfield Neural Network [39]. ABC was used by Kurban and Besdok [40] to estimate the main parameters of RBFNN, such as the centers, widths, and output weights. Yu and Duan [41] introduced a hybrid of ABC and Fuzzy C-Means clustering into RBFNN to improve image fusion accuracy. Many studies have used the hybrid of ABC with RBFNN as a model in various applications [42,43], including solving well-known datasets [44]. Jiang et al. [45] employed ABC to optimize the parameters in RBFNN and projected the ecological pressure. In another improvement, ABC with RBFNN was used to predict the solubility of CO2 in brine [46]. The performance analysis confirmed that ABC in RBFNN showed higher accuracy compared to other proposed models.

AIS is inspired by the immune system, utilizing its immunomodulatory properties to develop adaptive systems for accomplishing a wide range of tasks in different research areas, such as supervised classification, intrusion detection, optimization, and aggregation [47,48]. In theory, the binary AIS has produced a plethora of works, ranging from combinatorial optimization to real-life applications. In 1996, an AIS was described in terms of the natural immune system [49]. In 2012, AIS was developed by integrating affinity-based interaction with the Tabu search mechanism [50]. Such a prospect was expanded by Valarmathy and Ramani [51], who introduced a hybrid of AIS with RBFNN to improve the classification accuracy of magnetic resonance images. From the perspective of the logic rule in RBFNN, there has so far been no extensive study on optimizing the parameters of RBFNN by using AIS.

From the viewpoint of the satisfiability logic rule in RBFNN, very limited research has utilized metaheuristic algorithms to optimize the parameters of RBFNN. Kasihmuddin et al. successfully introduced 2SAT as the best logical rule in an artificial neural network system with different metaheuristic algorithms [39]. Metaheuristic algorithms are popular algorithms that can be used to search for a near-optimal solution for RBFNN [52]. The work of Mansor et al. [53] identified AIS as the best training model in the 3-SAT neural network system compared to other metaheuristic algorithms. This paper aims to examine the impact of AIS on the training phase of the network by constructing an RBFNN integrated with 2SAT. The adopted approach was inspired by the work of Hamadneh et al. [14,21], whereby the emphasis is on establishing an ideal logic model of RBFNN and reverse analysis (RA) by utilizing a comprehensive training process.

In this paper, the hidden neurons, their parameters in the hidden layer, and the output weights in the output layer of RBFNN were trained with the help of 2SAT and metaheuristic algorithms. Once the RBFNN parameters are fixed by the 2SATRA logic programming, the optimum set of output weights and the optimum output of RBFNN-2SATRA can be determined directly by the metaheuristic algorithm. To the best of the researchers' knowledge, no existing study has proposed merging 2SATRA in RBFNN with AIS.

Therefore, this study makes several contributions. First, this work investigates another perspective in dealing with tacit knowledge by utilizing an explicit training model. Second, this study is the first attempt to integrate 2SATRA into a feed-forward neural network: 2SATRA was inserted in RBFNN as an alternative system for extracting information from real datasets in logical symbolic form. Third, this work creates a modified RBFNN-2SATRA system with AIS to improve the training aspect of RBFNN-2SATRA; wide-ranging experiments with numerous performance measures have been carried out to quantify the effect of AIS on RBFNN-2SATRA. Finally, this study proposes RBFNN-2SATRAIS to achieve a promising basis of comparison with other models across all types of datasets.

To evaluate the effectiveness and efficiency of the AIS algorithm, the proposed algorithm was applied to five popular real benchmark datasets, namely the German Credit, Hepatitis, Congressional Voting Records, Car Evaluation, and Postoperative Patient datasets, chosen from the University of California, Irvine (UCI) machine learning repository [54]. The outcomes of the AIS algorithm were then compared with those of GA, DE, PSO, and ABC.

## 4. 2-Satisfiability Based Reverse Analysis Method (2SATRA) in RBFNN

In this study, the 2SAT-enhanced RA method (abbreviated as 2SATRA) [16] is proposed to extract the optimum 2SAT logical rule that explains the behavior of real datasets. In this regard, 2SATRA is a logic mining tool that utilizes RBFNN-2SAT models to extract a useful logical rule from the dataset. The 2SAT logical rule is utilized to represent and map the datasets due to its flexibility and simplicity. Thus, the attributes in the datasets are transformed into binary form {0, 1}. Specifically, the 2SATRA method extracts the optimal logical rule that represents the relationship between the attributes of a specific real dataset. Accordingly, the hidden information in the dataset is extracted to be utilized in classification or prediction. In this study, 2SATRA has been carried out in the RBFNN to describe an intelligent system for data mining, and each attribute has been transformed into an atom inside the clauses. Six attributes from the datasets were selected to form the 2SAT logical rule. The implementation of the 2SATRA method in the RBFNN networks is demonstrated in the following algorithm:

Step 1: convert all raw data to binary form and split it into a training dataset ${P}_{learn}$ (60%) and a test dataset ${P}_{test}$ (40%) [16,18].

Step 2: initialize the input data, the widths, and the centers of the neurons, and assign all the neurons binary data from Step 1.

Step 3: group the neurons into pairs, one pair per clause ${L}_{1},{L}_{2},\dots ,{L}_{n}$, such that ${P}_{learn}=1$.

Step 4: obtain ${P}_{best}$ by comparing the frequency of the 2SAT clauses in the overall learning dataset.

Step 5: obtain the output weights of the clauses in the hidden layer of ${P}_{best}$ by using GA, DE, PSO, ABC, and AIS.

Step 6: save the best output weight ${W}_{i}$ of ${P}_{best}$.

Step 7: find the final state of the neurons by computing the corresponding output of RBFNN-2SAT according to [63] as shown below:

$$f\left({w}_{i}\right)=\sum _{i=1}^{NN}{W}_{i}{\phi}_{i}\left({x}_{i}\right)$$

where ${W}_{i}$ is the output weight and $f\left({w}_{i}\right)$ is the RBFNN output value, with

$${\phi}_{i}\left({x}_{i}\right)=\mathrm{exp}\left(-\frac{{\Vert {x}_{i}-{c}_{i}\Vert}^{2}}{2{\sigma}_{i}^{2}}\right)$$

where ${\phi}_{i}$ is the activation function of input ${x}_{i}$ in the hidden layer, with center ${c}_{i}$ and width ${\sigma}_{i}$, and ${W}_{i}$ is the weight between the input data in the hidden layer and the output data in the output layer.

Step 8: induce all possible 2SAT logic ${P}_{1}^{B},{P}_{2}^{B},\dots ,{P}_{n}^{B}$ from the neuron states.

Step 9: examine all of the induced logic ${P}_{i}^{B}$ by comparing the outcome of ${P}_{i}^{B}$ with ${P}_{test}$.

Step 10: obtain all of the performance evaluations and calculate the accuracy.
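To make Steps 5–7 concrete, the following sketch computes an RBFNN-2SAT style output from a binary attribute vector using Gaussian basis functions. The specific centers, widths, output weights, and the 0.5 threshold for the final binary neuron state are illustrative assumptions, not the paper's exact values:

```python
import math

def rbf_activation(x, center, width):
    """Gaussian radial basis function of one hidden neuron."""
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / (2.0 * width ** 2))

def rbfnn_output(x, centers, widths, weights):
    """Linear output layer: weighted sum of hidden activations."""
    return sum(w * rbf_activation(x, c, s)
               for w, c, s in zip(weights, centers, widths))

# Hypothetical binary 2SAT-style input: six attributes, each 0 or 1.
x = [1, 0, 1, 1, 0, 1]
centers = [[1, 0, 1, 1, 0, 1], [0, 1, 0, 0, 1, 0]]   # assumed fixed centers
widths = [1.0, 1.0]                                   # assumed fixed widths
weights = [0.8, 0.2]                                  # assumed output weights

y = rbfnn_output(x, centers, widths, weights)
state = 1 if y >= 0.5 else 0   # assumed thresholding to a binary state
```

With the binary input matching the first center exactly, the first hidden neuron fires at full strength and dominates the weighted sum, so the final state is 1.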

It should be noted that 2SATRA is a method that utilizes the beneficial features of RBFNN and 2-satisfiability logic, namely RBFNN-2SATRA. Furthermore, 2SATRA is regarded as a feasible approach to extracting the best logical rule that governs the behavior of the dataset [16].

The complete flowchart in Figure 2 shows the steps of the methodology used in this work to train RBFNN-2SATRA.

#### 4.1. Genetic Algorithm in RBFNN-2SATRA

GA was developed in the 1970s and is a popular metaheuristic algorithm. Since then, it has been widely implemented to solve numerous optimization problems. The structure of GA can be separated into local search and global search [64], using selection, crossover, and mutation for adaptation and optimization in artificial systems and other problem-solving strategies [65]. The implementation of GA in RBFNN-2SATRA is denoted RBFNN-2SATRAGA. The steps involved in RBFNN-2SATRAGA are shown in Figure 3.
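As a rough sketch of how a GA of this kind can search for the output weights (selection of the fitter half, one-point crossover, Gaussian mutation), the following assumes a squared-error fitness over a small hypothetical matrix of hidden-layer activations; the population size, mutation rate, and toy data are illustrative assumptions, not the paper's settings:

```python
import random

def fitness(weights, phi_rows, targets):
    """Negative sum of squared errors of the linear output layer."""
    err = 0.0
    for phi, y in zip(phi_rows, targets):
        out = sum(w * p for w, p in zip(weights, phi))
        err += (out - y) ** 2
    return -err

def genetic_algorithm(phi_rows, targets, n_weights, pop_size=20,
                      generations=100, mut_rate=0.2, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=lambda ind: fitness(ind, phi_rows, targets),
                 reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_weights) if n_weights > 1 else 0
            child = a[:cut] + b[cut:]          # one-point crossover
            for i in range(n_weights):         # Gaussian mutation
                if rng.random() < mut_rate:
                    child[i] += rng.gauss(0, 0.2)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda ind: fitness(ind, phi_rows, targets))

# Hypothetical hidden-layer activations (one row per training pattern).
phi_rows = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
targets = [0.5, -0.3, 0.2]
best = genetic_algorithm(phi_rows, targets, n_weights=2)
```

Because the fitter half always survives, the best weight vector found so far is never lost between generations.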

#### 4.2. Differential Evolution Algorithm in RBFNN-2SATRA

DE is a population-based evolutionary algorithm that has been typically utilized in numerical optimization [66]. In DE, each individual (solution) of the population competes with its parent, and the fittest wins [67]. The implementation of DE in RBFNN-2SATRA is denoted RBFNN-2SATRADE. The algorithm steps in RBFNN-2SATRADE are shown in Figure 4.
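The parent-versus-trial competition described above can be sketched with the classic DE/rand/1/bin scheme applied to the same kind of output-weight search; the scale factor F, crossover rate CR, and the toy data are illustrative assumptions, not the paper's configuration:

```python
import random

def sse(weights, phi_rows, targets):
    """Sum of squared errors of the linear output layer."""
    total = 0.0
    for phi, y in zip(phi_rows, targets):
        out = sum(w * p for w, p in zip(weights, phi))
        total += (out - y) ** 2
    return total

def differential_evolution(phi_rows, targets, dim, pop_size=15,
                           generations=100, F=0.5, CR=0.9, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    cost = [sse(ind, phi_rows, targets) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # DE/rand/1 mutation with three distinct partners a, b, c.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:   # binomial crossover
                    trial.append(pop[a][j] + F * (pop[b][j] - pop[c][j]))
                else:
                    trial.append(pop[i][j])
            t_cost = sse(trial, phi_rows, targets)
            if t_cost <= cost[i]:                     # greedy selection
                pop[i], cost[i] = trial, t_cost
    best = min(range(pop_size), key=lambda k: cost[k])
    return pop[best], cost[best]

phi_rows = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy activations
targets = [0.4, 0.1, 0.5]
weights, err = differential_evolution(phi_rows, targets, dim=2)
```

The greedy one-to-one replacement is what makes each individual compete only against its own parent.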

#### 4.3. Particle Swarm Optimization Algorithm in RBFNN-2SATRA

The PSO algorithm is a popular swarm computation algorithm utilized for solving global optimization problems in continuous search spaces. It has been successfully applied to different types of real-world optimization problems due to its simplicity of implementation, alongside remarkable features such as flexible free parameters [68]. The main steps of the procedure in the RBFNN-2SATPSO model are shown in Figure 5.
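A minimal PSO sketch for the same output-weight search is given below; the inertia weight, acceleration coefficients, swarm size, and toy data are illustrative assumptions, not the paper's settings:

```python
import random

def sse(weights, phi_rows, targets):
    """Sum of squared errors of the linear output layer."""
    total = 0.0
    for phi, y in zip(phi_rows, targets):
        out = sum(w * p for w, p in zip(weights, phi))
        total += (out - y) ** 2
    return total

def pso(phi_rows, targets, dim, n_particles=15, iterations=100,
        inertia=0.7, c1=1.5, c2=1.5, seed=2):
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [sse(p, phi_rows, targets) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iterations):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][j] = (inertia * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                pos[i][j] += vel[i][j]
            c = sse(pos[i], phi_rows, targets)
            if c < pbest_cost[i]:          # update personal best
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:         # update global best
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

phi_rows = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy activations
targets = [0.3, -0.2, 0.1]
gbest, gbest_cost = pso(phi_rows, targets, dim=2)
```

Each particle is pulled towards both its own best position and the swarm's best, which is the "movement towards the best solution" pattern shared by the algorithms in this section.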

#### 4.4. Artificial Bee Colony Algorithm in RBFNN-2SATRA

The ABC algorithm is inspired by the social behavior of honey bees and is utilized to solve numerous optimization problems [69]. The ABC society consists of three groups of bees, called employed bees, onlooker bees, and scout bees, which cooperate to improve the solution. The algorithm involved in RBFNN-2SATRAABC is shown in Figure 6.
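The three bee phases can be sketched as follows for the same output-weight search; the colony size, abandonment limit, and toy data are illustrative assumptions, not the paper's configuration:

```python
import random

def sse(weights, phi_rows, targets):
    """Sum of squared errors of the linear output layer."""
    total = 0.0
    for phi, y in zip(phi_rows, targets):
        out = sum(w * p for w, p in zip(weights, phi))
        total += (out - y) ** 2
    return total

def abc_optimize(phi_rows, targets, dim, n_sources=10, cycles=100,
                 limit=20, seed=3):
    rng = random.Random(seed)

    def new_source():
        return [rng.uniform(-1, 1) for _ in range(dim)]

    sources = [new_source() for _ in range(n_sources)]
    cost = [sse(s, phi_rows, targets) for s in sources]
    trials = [0] * n_sources

    def try_neighbor(i):
        """Perturb one dimension of source i along a random partner."""
        k = rng.choice([j for j in range(n_sources) if j != i])
        j = rng.randrange(dim)
        cand = sources[i][:]
        cand[j] += rng.uniform(-1, 1) * (sources[i][j] - sources[k][j])
        c = sse(cand, phi_rows, targets)
        if c < cost[i]:
            sources[i], cost[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_sources):            # employed bee phase
            try_neighbor(i)
        fits = [1.0 / (1.0 + c) for c in cost]
        total = sum(fits)
        for _ in range(n_sources):            # onlooker bee phase
            r, acc, pick = rng.random() * total, 0.0, 0
            for idx, f in enumerate(fits):    # fitness-proportional pick
                acc += f
                if r <= acc:
                    pick = idx
                    break
            try_neighbor(pick)
        for i in range(n_sources):            # scout bee phase
            if trials[i] > limit:             # abandon stagnant sources
                sources[i] = new_source()
                cost[i] = sse(sources[i], phi_rows, targets)
                trials[i] = 0
    b = min(range(n_sources), key=lambda k: cost[k])
    return sources[b], cost[b]

phi_rows = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy activations
targets = [0.2, 0.3, 0.5]
best_source, best_cost = abc_optimize(phi_rows, targets, dim=2)
```

The scout phase is the mechanism the Results section refers to: after `limit` failed improvement attempts a food source is abandoned, which helps the colony escape local minima.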

#### 4.5. Artificial Immune System Algorithm in RBFNN-2SATRA

In recent years, non-traditional, nature-inspired optimization techniques have grown in popularity in the combinatorial optimization field. The AIS algorithm is one of these techniques and is inspired by the human body's immune system. The AIS algorithm is known as an adaptive system, stimulated by theoretical immunology and observed immune functions, which is applied to complex problem fields [70]. AIS algorithm applications exist in fields such as computer network security, biological modeling, virus detection, robotics, data mining, scheduling, classification, and clustering [53,70]. The AIS implementation in RBFNN-2SATRA is denoted RBFNN-2SATRAAIS. The algorithm involved in RBFNN-2SATRAAIS is shown in Figure 7.
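A minimal clonal-selection style AIS sketch for the same output-weight search is shown below: fitter antibodies receive more clones mutated at lower rates, and the weakest slot is replaced by a random newcomer each generation. All parameters and data are illustrative assumptions, not the paper's AIS configuration:

```python
import random

def sse(weights, phi_rows, targets):
    """Sum of squared errors of the linear output layer."""
    total = 0.0
    for phi, y in zip(phi_rows, targets):
        out = sum(w * p for w, p in zip(weights, phi))
        total += (out - y) ** 2
    return total

def clonal_selection(phi_rows, targets, dim, pop_size=10,
                     generations=100, n_clones=5, seed=4):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ab: sse(ab, phi_rows, targets))
        clones = []
        for rank, ab in enumerate(pop):
            # Higher-affinity antibodies get more clones, mutated gently;
            # weaker ones get fewer clones, mutated more aggressively.
            for _ in range(max(1, n_clones - rank)):
                rate = 0.05 * (rank + 1)
                clones.append([g + rng.gauss(0, rate) for g in ab])
        pool = pop + clones
        pool.sort(key=lambda ab: sse(ab, phi_rows, targets))
        pop = pool[:pop_size - 1]
        # Receptor editing: replace the weakest slot with a newcomer.
        pop.append([rng.uniform(-1, 1) for _ in range(dim)])
    pop.sort(key=lambda ab: sse(ab, phi_rows, targets))
    return pop[0], sse(pop[0], phi_rows, targets)

phi_rows = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy activations
targets = [0.6, -0.1, 0.5]
best_antibody, best_err = clonal_selection(phi_rows, targets, dim=2)
```

Note how cloning temporarily enlarges the pool of search agents each generation, which mirrors the non-constant agent count discussed for AIS in the Results section.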

## 7. Results and Discussion

Based on the experiments, the performance of the training algorithms was assessed for different numbers of neurons $6\le NN\le 120$. Several measurements were used to assess the RBFNN-2SATRA models with the metaheuristic algorithms: Accuracy and the Schwarz Bayesian Criterion (SBC) assess the prediction quality, while the Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), and Central Processing Unit time (CPU time) reflect the structural complexity of the RBFNN-2SATRA network as the number of neurons rises. These measures are defined as follows.

RMSE [10] is a standard error estimator that is commonly used in prediction and classification. During the learning phase, RMSE measures the deviation between the current value $f\left({w}_{i}\right)$ and the target value ${y}_{i}$:

$$\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum _{i=1}^{n}{\left(f\left({w}_{i}\right)-{y}_{i}\right)}^{2}}$$

A lower RMSE indicates better accuracy of the model.

MAE is a loss-function type of error that evaluates the direct difference between the expected value and the current value. During the learning phase, MAE measures the absolute difference between the current value $f\left({w}_{i}\right)$ and ${y}_{i}$ [70]:

$$\mathrm{MAE}=\frac{1}{n}\sum _{i=1}^{n}\left|f\left({w}_{i}\right)-{y}_{i}\right|$$

A smaller MAE value indicates a better fit of the method.

MAPE [10] measures the size of the error in percentage terms. During the learning phase, MAPE measures the percentage difference between the current value $f\left({w}_{i}\right)$ and ${y}_{i}$:

$$\mathrm{MAPE}=\frac{100}{n}\sum _{i=1}^{n}\left|\frac{f\left({w}_{i}\right)-{y}_{i}}{{y}_{i}}\right|$$

A lower MAPE indicates better accuracy, in percentage terms, for the model.

The Schwarz Bayesian Criterion can be written as

$$\mathrm{SBC}=n\,\mathrm{ln}\left(\frac{\mathrm{SSE}}{n}\right)+pa\,\mathrm{ln}\left(n\right)$$

where $\mathrm{SSE}$ is the sum of squared errors and $pa$ is the number of centers, widths, and output weights. For SBC, lower values are better, and smaller error values indicate better accuracy. The accuracy is defined as follows:

$$\mathrm{Accuracy}=\frac{{N}_{correct}}{{N}_{test}}\times 100\%$$

where ${N}_{correct}$ is the number of correctly classified test samples and ${N}_{test}$ is the total number of test samples. The accuracy determines the ability of the system to train on the dataset; meanwhile, a lower CPU time indicates a more efficient algorithm.
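The error measures described above can be sketched as follows; the SBC form shown is one common variant of the Schwarz criterion and may differ in detail from the paper's exact definition:

```python
import math

def rmse(pred, actual):
    """Root mean square error."""
    n = len(pred)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / n)

def mae(pred, actual):
    """Mean absolute error."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

def mape(pred, actual):
    """Mean absolute percentage error (assumes no zero targets)."""
    n = len(pred)
    return 100.0 / n * sum(abs((p - a) / a) for p, a in zip(pred, actual))

def sbc(pred, actual, pa):
    """One common Schwarz criterion form: n*ln(SSE/n) + pa*ln(n)."""
    n = len(pred)
    sse_val = sum((p - a) ** 2 for p, a in zip(pred, actual))
    return n * math.log(sse_val / n) + pa * math.log(n)

def accuracy(pred_labels, actual_labels):
    """Percentage of correctly classified samples."""
    hits = sum(1 for p, a in zip(pred_labels, actual_labels) if p == a)
    return 100.0 * hits / len(pred_labels)

pred, actual = [1.1, 1.9, 3.2], [1.0, 2.0, 3.0]   # toy predictions
```

Note that SBC grows with the parameter count `pa`, so among models with similar errors the criterion favors the smaller network.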

The results of RBFNN-2SATRA with GA, DE, PSO, ABC, and AIS are summarized in Table 3 and Figures 8–27. Based on the experimental results, the following findings are concluded: (1) The proposed RBFNN model with 2SATRA can receive more input data and can deal with the hidden neurons with fixed values of the hidden-layer parameters (width and center). In this situation, RBFNN-2SATRA with AIS established the best model, classifying the datasets based on the 2SATRA logic rule with minimal values of the errors (Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Schwarz Bayesian Criterion (SBC), and Central Processing Unit (CPU) time). (2) The RBFNN-2SATRAAIS model showed the best performance in terms of RMSE, SBC, and CPU time, even as the number of neurons increased. The important features of AIS, such as variation, recognition, memory, learning, and self-organization, influenced its performance capability. (3) RBFNN-2SATRAAIS showed the best performance in terms of SBC, even as the number of neurons increased. According to Hamadneh et al. [21], the lowest value of SBC indicates that the model can be classified as the best model. (4) In terms of CPU time, the RBFNN-2SATRAAIS model performed faster than the other RBFNN-2SATRA models. When the number of neurons exceeded 40, the possibility of GA, DE, and PSO being trapped in a trial-and-error state increased, causing them to reach premature convergence. On the other hand, RBFNN-2SATRA with ABC had a relatively large training error because, during the employed bee phase, the algorithm's time was spent without achieving significant improvement. The scout bee phase prevented the algorithm from being trapped at local minima after a certain count ("limit") of unsuccessful improvement attempts. Several iterations were required for ABC to produce high-quality solutions (output weights). These experiments have shown that the AIS algorithm can be successfully applied to train RBFNN-2SATRA because new generations are formed through cloning. In AIS, the number of search agents is not constant and increases due to cloning operations; even the clones themselves move to neighboring nodes, so fewer iterations are required for RBFNN-2SATRAAIS to produce high-quality solutions (output weights).

21], the lowest value of SBC indicates that the model can be classified as the best model. (4) In terms of the CPU time, the model RBFNN-2SATRAAIS has been reported as a faster performance than other RBFNN-2SATRA models. When the number of neurons exceeded 40, the possibility for GA, DE, and PSO trapped in trial and error state increased. Trial and error caused GA, DE, and PSO to reach pre-mature convergence. On the other hand, RBFNN-2SATRA with ABC had a relative training error because, during the employed bee phase, the time of the algorithm was wasted without achieving significant improvement. The scout bee phase allowed the algorithm from being trapped at the local minima after a certain count “limit” of unsuccessful improving attempts. Several iterations were required for ABC to produce solutions (output weight) with high quality. These experiments have shown that the AIS algorithm can be successfully applied to train RBFNN-2SATRA due to new generations being formed through cloning. In AIS, the number of the search agents has not been constant and increased due to cloning operations. Even the clone itself moved to the neighboring nodes, which led to fewer iterations required for RBFNN-2SATRAAIS to produce a solution (output weight) with high quality.

The simulation results confirmed that the AIS algorithm complied efficiently with RBFNN based on 2SATRA in terms of the average training values, where RMSE improved by up to 97.5%, SBC by up to 99.9%, and CPU time by 99.8%, and the average testing values, where MAE improved by up to 78.5% and MAPE by up to 71.4%; the model was capable of correctly classifying 81.6% of the test samples, a higher percentage than the GA, DE, PSO, and ABC algorithms achieved. These experiments also showed that the AIS algorithm can be reliably applied to training the RBFNN-2SATRA model. Another observation concerns the efficacy of AIS, which can be clearly observed when the number of neurons increases. Furthermore, AIS with RBFNN-2SATRA achieved promising performance based on RMSE, MAPE, MAE, SBC, and CPU time. This confirmed that AIS in RBFNN-2SATRA can be utilized in the pursuit of better forecasting results for the 2SATRA logic rule.

## 8. Conclusions

The findings of the study confirmed the significant improvement of the RBFNN model achieved by utilizing the AIS algorithm in performing 2SATRA to extract the best logical rule that governs the behavior of the dataset. The new training method utilizing AIS was used to train five recognized datasets and was compared with four training algorithms: ABC, PSO, DE, and GA. To affirm the performance of the proposed algorithm, all algorithms were compared through analytical tests on RBFNN-2SATRA with different numbers of neurons. Based on the results, analysis, and discussion in this study, the following conclusions can be drawn. AIS showed a faster convergence rate with superior accuracy results. AIS achieved lower values of the RMSE, MAE, and MAPE errors, a lower value of SBC, and faster CPU time for training RBFNN-2SATRA. Therefore, AIS proved to be an effective approach to training RBFNN-2SATRA for classifying various datasets with diverse numbers of features and training samples, and it can train RBFNN-2SATRA with differing numbers of neurons. The simulation results proved that AIS complied efficiently with RBFNN-2SATRA in terms of the average training values, where RMSE improved by up to 97.5%, SBC by up to 99.9%, and CPU time by 99.8%, and the average testing values, where MAE improved by up to 78.5% and MAPE by up to 71.4%, alongside its capability of classifying 81.6% of the test samples, a higher percentage than the GA, DE, PSO, and ABC algorithms achieved. The results confirmed that AIS significantly outperformed the other algorithms, even on substantially large datasets.

For future work, two key aspects are recommended. First, the proposed RBFNN-2SATRA can be investigated for other data mining tasks, such as time series prediction and regression. Second, further studies should examine the efficiency of RBFNN-2SATRAAIS in solving classical optimization problems, such as the N-Queens problem and the Traveling Salesman Problem.