Artificial Immune System in Doing 2-Satisfiability Based Reverse Analysis Method via a Radial Basis Function Neural Network

Abstract: A radial basis function neural network-based 2-satisfiability reverse analysis (RBFNN-2SATRA) primarily depends on adequately obtaining the optimal linear output weights, alongside the lowest iteration error. This study investigates the effectiveness and capability of the artificial immune system (AIS) algorithm in RBFNN-2SATRA, and aims to improve the output linearity to obtain the optimal output weights. In this paper, the AIS algorithm is introduced and implemented to enhance the effectiveness of the connection weights throughout the RBFNN-2SATRA training. To prove that the introduced method functions efficiently, five well-established datasets were solved. Moreover, the use of AIS for RBFNN-2SATRA training is compared with the genetic algorithm (GA), differential evolution (DE), particle swarm optimization (PSO), and artificial bee colony (ABC) algorithms. The simulation results showed that the proposed method outperformed the existing four algorithms in terms of Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Schwarz Bayesian Criterion (SBC), and Central Processing Unit time (CPU time), as well as in robustness, accuracy, and sensitivity throughout the simulation process. The proposed AIS algorithm effectively conformed to RBFNN-2SATRA: the average training improvement reached 97.5% in RMSE, 99.9% in SBC, and 99.8% in CPU time, while the average testing improvement reached 78.5% in MAE and 71.4% in MAPE, and the model was capable of classifying a higher percentage (81.6%) of the test samples compared with the results for the GA, DE, PSO, and ABC algorithms.


Introduction
Radial basis function neural network (RBFNN) has been widely used in many fields due to its simpler network structure, faster learning speed, and better approximation capabilities [1,2]. RBFNN is a feed-forward neural network, which was first utilized by Moody and Darken [3]; they confirmed that the RBFNN has faster learning speed than the multilayer perceptron neural network (MLP). Moreover, RBFNN is simpler than the MLP network, which may contain more than three layers in its structure; thus, the process of training in RBFNN is generally faster than in MLP [4,5]. The difficulty of applying the traditional RBFNN lies in training the network, which should include selecting the proper input variables, the number of hidden neurons, and estimating the parameters (centers, widths) of the RBFNN [4]. The majority of traditional RBFNN approaches focus only on determining these parameters. Research on the AIS, by contrast, has produced a plethora of works, ranging from combinatorial optimization to real-life applications. In 1996, AIS was first described as a system inspired by the natural immune system [49]. In 2012, AIS was developed further by integrating the affinity-based interaction of AIS with the Tabu search mechanism [50]. Such a prospect was expanded by Valarmathy and Ramani [51], who introduced a hybrid of AIS with RBFNN to improve the classification accuracy of magnetic resonance images. From the perspective of the logic rule in RBFNN, however, there has so far been no extensive study on optimizing the parameters of RBFNN by using AIS.
From the viewpoint of the satisfiability logic rule in RBFNN, very limited research has been done on utilizing metaheuristic algorithms for optimizing the parameters of RBFNN. Kasihmuddin et al. [39] successfully introduced 2SAT as the best logical rule in an artificial neural network system with different metaheuristic algorithms. A metaheuristic algorithm is a popular approach for searching for a semi-optimal solution to RBFNN [52]. The work of Mansor et al. [53] identified AIS as the best training model in the 3-SAT neural network system compared with other metaheuristic algorithms. This paper examines the impact of AIS on the training phase of the network by constructing RBFNN integrated with 2SAT. The adopted approach in this work is inspired by the work of Hamadneh et al. [14,21], whereby the emphasis is on establishing an ideal logic model of RBFNN and reverse analysis (RA) by utilizing a comprehensive training process.
In this paper, the hidden neurons, and their parameters in the hidden layer, and the output weight in the output layer of RBFNN, were trained with the help of the idea of 2SAT and the metaheuristics algorithm. Once the RBFNN parameters are fixed by logic programming 2SATRA, the optimum set of output weights and the optimum output of RBFNN-2SATRA can be directly determined by utilizing the metaheuristics algorithm. To the best knowledge of the researchers, none of the existing studies proposed merging 2SATRA in RBFNN with AIS.
Therefore, this study makes several contributions. First, this work investigates another perspective on dealing with tacit knowledge, utilizing an explicit training model. Second, this study is the first attempt to integrate 2SATRA into a feed-forward neural network; 2SATRA was inserted into RBFNN as an alternative system for extracting information from real datasets in logical symbolic form. Third, this work creates a modified RBFNN-2SATRA system with AIS to improve the training aspect of RBFNN-2SATRA; wide-ranging experiments with numerous performance measures have been carried out to measure the effect of AIS on RBFNN-2SATRA. Finally, this study proposes RBFNN-2SATRAAIS to achieve a promising basis of comparison with other models for all types of datasets.
To evaluate the effectiveness and efficiency of the AIS algorithm, the proposed algorithm has been applied to five popular real benchmark datasets namely: German Credit Dataset, Hepatitis Dataset, Congressional Voting Records Dataset, Car Evaluation Dataset, and Postoperative Patient Dataset, chosen from the University of California, Irvine (UCI) machine learning repository [54]. The outcomes of the AIS algorithm were then compared with GA, DE, PSO, and ABC.

2-Satisfiability Logic Representation
2-satisfiability (2SAT) is defined as a logic rule for determining the satisfiability of clause sets, which consist of two literals in each clause [55]. The properties of 2SAT can be summarized as follows:
i. A set of m logical variables, x 1 , x 2 , . . . , x m . Each variable stores a binary value x i ∈ {1, 0} that represents TRUE and FALSE, respectively.
ii. Each variable x i can appear as a literal, where the positive literal and the negative literal are defined as x m and ¬x m , respectively.
iii. A set of n distinct clauses, C 1 , C 2 , . . . , C n . The clauses C i are connected by logical AND (∧), and every k literals (here k = 2) form a single C i connected by logical OR (∨).
By using properties (i) to (iii), the explicit definition of the 2SAT formulation P 2SAT can be given (in a standard form reconstructed here) as P 2SAT = C 1 ∧ C 2 ∧ . . . ∧ C n , where each C i = (x i ∨ y i ). An example of P 2SAT is P 2SAT = (x 1 ∨ ¬x 2 ) ∧ (¬x 3 ∨ x 4 ). The P 2SAT formulation must be represented in Conjunctive Normal Form (2CNF), because the satisfiability nature of CNF can be conserved, unlike in other forms, such as Disjunctive Normal Form (DNF). In this paper, the information in the datasets is represented in the form of attributes. The attributes are defined as variables in P 2SAT and become the symbolic rule for the Artificial Neural Network (ANN). In this study, 2SAT is considered the main driver because the focus of the logic programming involves ensuring that the program considers only two literals per clause per implementation. It has been proven in previous studies that many combinatorial problems can be formulated using the 2SAT logic [56][57][58]. A key reason that makes the 2SAT logic an appropriate approach for representing logical rules in the neural network is that choosing two literals per clause in satisfiability logic reduces the logic complexity of disclosing the relation between the variables in the neural network.
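The 2CNF representation above can be made concrete with a short sketch (not from the paper; the encoding convention is an assumption): each clause is a pair of signed integers, where a positive literal i stands for x_i and a negative literal -i stands for ¬x_i, and a formula is satisfied when every clause contains at least one true literal.

```python
def evaluate_2sat(clauses, assignment):
    """Return True if every 2SAT clause has at least one satisfied literal.

    clauses: list of 2-tuples of signed ints (positive = x_i, negative = NOT x_i)
    assignment: dict mapping variable index -> binary value in {0, 1}
    """
    def literal_value(lit):
        value = assignment[abs(lit)]
        return value == 1 if lit > 0 else value == 0
    return all(any(literal_value(lit) for lit in clause) for clause in clauses)

# Example: P_2SAT = (x1 OR NOT x2) AND (NOT x3 OR x4)
clauses = [(1, -2), (-3, 4)]
assignment = {1: 1, 2: 0, 3: 0, 4: 0}
print(evaluate_2sat(clauses, assignment))  # True: both clauses are satisfied
```

Under this convention a satisfiable assignment corresponds to P 2SAT = 1, the state the 2SATRA training phase later seeks.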

Radial Basis Function Neural Network (RBFNN)
RBFNN is a feed-forward neural network, which was first utilized by Moody and Darken [3]. Compared to other networks, RBFNN has a more integrated structure and faster learning speed. In terms of composition, RBFNN involves three layers: the input layer, the hidden layer, and the output layer [59]. According to [60], the Gaussian activation function was chosen due to its differentiability and its capability of establishing the non-linear relationship between the input neurons in the input layer and the output neuron in the output layer. The Gaussian activation function ϕ i (x) is given by [60] ϕ i (x) = exp(−‖w ji x j − c i ‖² / (2σ i ²)). We set w ji = 1, because other values of w ji would result in a biased selection, which leads to weighted 2SAT [4,13,61]. Here c i is the center and σ i is the width, computed over the m neurons per clause as shown in the logic programming P 2SAT in Equation (1) [10,13], x i is a binary input value for the N input neurons, and ‖·‖ denotes the Euclidean norm. The final output of RBFNN is given as [62] f (w i ) = Σ i w i ϕ i (x), where ( f (w 1 ), . . . , f (w k )) are the RBFNN output values and w i = (w 1 , w 2 , w 3 , . . . , w N ) is the output weight. Figure 1 illustrates the structure of the satisfiability RBFNN in dealing with satisfiability logic programming. The RBFNN process works as follows. First, the input neurons receive the input data entering the network through the input layer. Then, each neuron in the hidden layer calculates the center and the width between the input data and the prototype stored inside it by using the Gaussian activation function, which helps to obtain the optimal output weights for the output layer. In this study, we have established a new approach to determine the best RBFNN structure for 2-satisfiability reverse analysis (2SATRA).
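The forward pass described above can be sketched in a few lines (a minimal illustration; the variable names and the example values are assumptions, not the paper's implementation): each hidden neuron applies the Gaussian activation to the distance between the input and its center, and the output is the weighted sum of the activations.

```python
import math

def gaussian_activation(x, center, width):
    """Gaussian RBF: exp(-||x - c||^2 / (2 * sigma^2)), with input weights fixed to 1."""
    sq_dist = sum((xj - cj) ** 2 for xj, cj in zip(x, center))
    return math.exp(-sq_dist / (2 * width ** 2))

def rbfnn_output(x, centers, widths, out_weights):
    """Network output f = sum_i w_i * phi_i(x) over all hidden neurons."""
    return sum(w * gaussian_activation(x, c, s)
               for w, c, s in zip(out_weights, centers, widths))

# Illustrative two-neuron example (values are hypothetical):
x = [1.0, 0.0]
centers = [[1.0, 0.0], [0.0, 1.0]]
widths = [1.0, 1.0]
out_weights = [0.5, 0.5]
y = rbfnn_output(x, centers, widths, out_weights)
# First neuron: zero distance -> phi = 1; second: squared distance 2 -> phi = e^{-1}
print(round(y, 4))  # 0.6839
```

In the 2SATRA setting, the centers and widths are fixed by the logic rule and only the output weights are searched by the metaheuristic.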


2-Satisfiability Based Reverse Analysis Method (2SATRA) in RBFNN
In this study, 2SAT enhances the RA method (the combination is abbreviated as 2SATRA) [16], which is proposed to extract the optimum 2SAT logic rule that explains the behavior of real datasets. In this regard, 2SATRA is a logic mining tool that utilizes RBFNN-2SAT models for extracting the useful logic rule from the dataset. The 2SAT logical rule is utilized to represent and map the datasets due to its flexibility and simplicity. Thus, the attributes in the datasets are transformed into a binary form {0, 1}. Specifically, the 2SATRA method extracts the optimal logical rule, which represents the relationship between the attributes of a specific real dataset. Accordingly, the hidden information in the dataset is extracted to be utilized in classification or prediction. In this study, 2SATRA has been carried out in the RBFNN to describe an intelligent system for data mining, and each attribute has been transformed into an atom inside the clauses. Therefore, six attributes from the datasets were selected to form the 2SAT logical rule. The implementation of the 2SATRA method in the RBFNN network is demonstrated in the following algorithm: Step 1: convert the raw dataset to binary and split it into a training dataset and a test dataset, with the outcomes P learn (60%) and P test (40%) [16,18].
Step 2: initialize the input data, width, and center of the neurons, and designation of all the neurons with binary data from Step 1.
Step 3: segregate the collection of two neurons per clause L 1 , L 2 , . . . , L n that leads to P learn = 1.
Step 4: obtain P best by comparing the frequency of the 2SAT clauses in the overall learning dataset.
Step 5: check the output weight of the clauses in the hidden layer of P best by using GA, DE, PSO, ABC, and AIS.
Step 6: save the best output weight W i of P best .
Step 7: find the final state of neurons by computing the corresponding output of RBFNN-2SAT according to [63], as shown below (in a standard form reconstructed here): f (w i ) = Σ i W i ϕ i (x i ), where w i is the output weight, f (w i ) is the RBFNN output value, ϕ i is the activation function of input x i in the hidden layer, and W i is the weight between the input data in the hidden layer and the output data in the output layer.
Step 8: induce all possible 2SAT logic P B 1 , P B 2 , . . . , P B n from the neuron states.
Step 9: examine all of the induced logic P B i by comparing the outcome of P B i with P test .
Step 10: obtain all of the performance evaluations and calculate the accuracy. It should be noted that 2SATRA is a method that utilizes the beneficial features of RBFNN and 2-satisfiability logic, hence RBFNN-2SATRA. Furthermore, 2SATRA is regarded as a feasible approach to help extract the best logical rule, which governs the behavior of the dataset [16].
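The ten steps above can be outlined as a compact sketch (all names and the synthetic data are hypothetical; the output-weight training of Step 5 is abstracted away): binarized records are split 60/40, attributes are paired into 2SAT clauses, the most frequent clause pattern is taken as P best, and the induced rule is scored on the test split.

```python
import random
from collections import Counter

random.seed(0)
# Step 1: binary records with six attributes, split 60% / 40%
data = [[random.randint(0, 1) for _ in range(6)] for _ in range(100)]
split = int(0.6 * len(data))
train, test = data[:split], data[split:]

# Step 3: segregate two attributes (neurons) per clause
def clause_patterns(record):
    return tuple(tuple(record[i:i + 2]) for i in range(0, 6, 2))

# Step 4: P_best = most frequent clause-value pattern over the training set
best = Counter(clause_patterns(r) for r in train).most_common(1)[0][0]

# Steps 9-10: compare the induced rule against the test split and score it
matches = sum(clause_patterns(r) == best for r in test)
accuracy = 100.0 * matches / len(test)
print(f"P_best pattern: {best}, test accuracy: {accuracy:.1f}%")
```

This is only a structural illustration of the logic mining loop; in the paper, Step 5 trains the output weights with GA, DE, PSO, ABC, or AIS before the induced logic is evaluated.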
The complete flowchart in Figure 2 shows the steps of the methodology used in this work to train RBFNN-2SATRA.

Genetic Algorithm in RBFNN-2SATRA
GA was developed in the 1970s and is a popular metaheuristic algorithm. Since then, it has been widely implemented to solve numerous optimization problems. The structure of GA can be separated into local searches and global searches [64], using crossover, selection, and mutation for adaptation and optimization in artificial systems and other problem-solving strategies [65]. The implementation of GA in RBFNN-2SATRA is denoted RBFNN-2SATRAGA. The steps involved in RBFNN-2SATRAGA are shown in Figure 3.
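A hedged sketch of how GA could search a real-valued output-weight vector (the fitness function, population size, and rates below are illustrative assumptions, not the paper's settings): the fittest half is kept, children are built by one-point crossover, and occasional Gaussian mutation keeps diversity.

```python
import random

random.seed(1)
TARGET = [0.2, -0.5, 0.8]                      # stands in for the ideal weights

def fitness(w):                                # lower is better (squared error)
    return sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

pop = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness)
    parents = pop[:10]                         # selection: keep the fittest half
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET)) # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:              # mutation on one random gene
            i = random.randrange(len(child))
            child[i] += random.gauss(0, 0.1)
        children.append(child)
    pop = parents + children

best = min(pop, key=fitness)
print(round(fitness(best), 4))
```

Keeping the parents (elitism) guarantees the best fitness never worsens between generations.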


Differential Evolution Algorithm in RBFNN-2SATRA
DE is an evolutionary population-based algorithm that has typically been utilized in numerical optimization [66]. In DE, each individual (solution) of the population competes with its parent, and the fittest wins [67]. The implementation of DE in RBFNN-2SATRA is denoted RBFNN-2SATRADE. The algorithm steps in RBFNN-2SATRADE are shown in Figure 4.
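The one-to-one competition described above can be sketched with the classic DE/rand/1/bin scheme (the parameters F, CR, and the toy fitness below are illustrative assumptions): each target vector competes with a trial vector built from three other members, and the fitter of the two survives.

```python
import random

random.seed(2)
F, CR, DIM = 0.8, 0.9, 3                       # illustrative DE parameters

def fitness(w):
    return sum(wi ** 2 for wi in w)            # toy objective: minimize ||w||^2

pop = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(15)]
for _ in range(150):
    for i, target in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]
        jrand = random.randrange(DIM)          # force at least one mutant gene
        trial = [m if (random.random() < CR or j == jrand) else t
                 for j, (m, t) in enumerate(zip(mutant, target))]
        if fitness(trial) <= fitness(target):  # greedy one-to-one selection
            pop[i] = trial

best = min(pop, key=fitness)
print(round(fitness(best), 6))
```

The greedy replacement is what makes each individual's fitness monotonically non-increasing across generations.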


Particle Swarm Optimization Algorithm in RBFNN-2SATRA
The PSO algorithm is a popular swarm computation algorithm. It is utilized for solving global optimization in continuous search spaces. It has been successfully applied to solve different types of real-world optimization problems due to its simplicity of implementation, alongside remarkable features such as the presence of flexible free parameters [68]. The main steps of the procedure in the RBFNN-2SATRAPSO model are shown in Figure 5.
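A minimal PSO sketch (the inertia and acceleration coefficients and the toy objective are illustrative assumptions): each particle blends its current velocity with pulls toward its own best position and the swarm's global best.

```python
import random

random.seed(3)
W, C1, C2, DIM = 0.7, 1.5, 1.5, 2              # illustrative PSO parameters

def fitness(p):
    return sum(x ** 2 for x in p)              # toy objective: minimize ||p||^2

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(20)]
vel = [[0.0] * DIM for _ in range(20)]
pbest = [p[:] for p in pos]                    # personal bests
gbest = min(pbest, key=fitness)                # global best

for _ in range(100):
    for i in range(20):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                         + C2 * r2 * (gbest[d] - pos[i][d]))     # social pull
            pos[i][d] += vel[i][d]
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
            if fitness(pbest[i]) < fitness(gbest):
                gbest = pbest[i][:]

print(round(fitness(gbest), 6))
```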

Artificial Bee Colony Algorithm in RBFNN-2SATRA
The ABC algorithm is inspired by the social behavior of natural bees. It has been utilized to solve numerous optimization problems [69]. An ABC society consists of three groups, called employed bees, scout bees, and onlooker bees, that help improve the solution. The algorithm involved in RBFNN-2SATRAABC is shown in Figure 6.
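A compact sketch of the three bee phases (the source count, abandonment limit, and toy objective are illustrative assumptions): employed bees perturb their food sources, onlookers revisit sources with fitness-proportional probability, and a source that fails to improve beyond a `LIMIT` of attempts is abandoned to a scout.

```python
import random

random.seed(4)
DIM, SOURCES, LIMIT = 2, 10, 20                # illustrative ABC parameters

def fitness(s):
    return sum(x ** 2 for x in s)              # toy objective: minimize ||s||^2

sources = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SOURCES)]
trials = [0] * SOURCES                         # failed-improvement counters

def try_improve(i):
    """Perturb source i toward/away from a random neighbor; keep if better."""
    k = random.choice([j for j in range(SOURCES) if j != i])
    d = random.randrange(DIM)
    candidate = sources[i][:]
    candidate[d] += random.uniform(-1, 1) * (sources[i][d] - sources[k][d])
    if fitness(candidate) < fitness(sources[i]):
        sources[i], trials[i] = candidate, 0
    else:
        trials[i] += 1

for _ in range(200):
    for i in range(SOURCES):                   # employed bee phase
        try_improve(i)
    # onlooker bee phase: probabilities fixed at the start of the phase
    weights = [1.0 / (1.0 + fitness(s)) for s in sources]
    for _ in range(SOURCES):
        i = random.choices(range(SOURCES), weights=weights)[0]
        try_improve(i)
    for i in range(SOURCES):                   # scout bee phase
        if trials[i] > LIMIT:
            sources[i] = [random.uniform(-5, 5) for _ in range(DIM)]
            trials[i] = 0

best = min(sources, key=fitness)
print(round(fitness(best), 6))
```

The `trials`/`LIMIT` mechanism is the "count limit" of unsuccessful attempts discussed later in the Results section.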



Artificial Immune System Algorithm in RBFNN-2SATRA
In recent years, non-traditional, nature-inspired optimization techniques have been growing in popularity in the combinatorial optimization field. The AIS algorithm is one of these techniques, inspired by the human body's immune system. The AIS algorithm is known as an adaptive system stimulated by theoretical immunology and observed immune functions, which are applied to complex problem fields [70]. AIS applications exist in fields such as computer network security, biological modeling, virus detection, robotics, data mining, scheduling, classification, and clustering [53,70]. The AIS implementation in RBFNN-2SATRA is denoted RBFNN-2SATRAAIS. The algorithm involved in RBFNN-2SATRAAIS is shown in Figure 7.
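A clonal-selection sketch of the AIS idea (population size, clone factor, and mutation rates are illustrative assumptions, not the paper's settings): antibodies with higher affinity receive more clones, clones are hypermutated more strongly when their parent is weaker, and the best members of the enlarged pool survive, so the number of search agents temporarily grows through cloning.

```python
import random

random.seed(5)
DIM, POP, CLONE_FACTOR = 3, 12, 3              # illustrative AIS parameters

def affinity(ab):                              # higher is better
    return -sum(x ** 2 for x in ab)            # toy objective: minimize ||ab||^2

antibodies = [[random.uniform(-2, 2) for _ in range(DIM)] for _ in range(POP)]
for _ in range(120):
    antibodies.sort(key=affinity, reverse=True)
    clones = []
    for rank, ab in enumerate(antibodies):
        # fitter antibodies (lower rank) receive more clones
        n_clones = max(1, CLONE_FACTOR * (POP - rank) // POP)
        rate = 0.05 * (rank + 1)               # weaker parent -> stronger mutation
        for _ in range(n_clones):
            clones.append([x + random.gauss(0, rate) for x in ab])
    pool = antibodies + clones                 # agent count grows via cloning
    pool.sort(key=affinity, reverse=True)
    antibodies = pool[:POP]                    # keep the best, discard the rest

print(round(-affinity(antibodies[0]), 6))
```

Because the parent pool is retained before truncation, the best affinity never degrades, which mirrors the memory property of AIS noted in the Results.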

Experimental Setup
The experimental simulation is developed to assess the capacity of the metaheuristic algorithms to train RBFNN in doing 2SATRA. In every dataset, 60% of the data points will be utilized for training, and 40% will be used for testing. The k-means clustering [18] will be used to convert the dataset into a binary representation. As for missing data in the dataset, the neuron state will be defined randomly. All of the 2SATRA models were implemented in Microsoft Visual C++ on Microsoft Windows 7 (64-bit), with a 3.40 GHz processor, 4096 MB RAM, and a 500 GB hard drive. C++ was used to give the user full control over memory management. Note that all simulations were conducted on the same device to avoid any possible biases. The total CPU time for both training and testing is capped at 24 h [71]. If a model exceeds the recommended CPU time threshold, the structure of the recommended algorithm does not have the capability to train RBFNN-based 2SATRA using real-life datasets. In terms of the choice of activation function, we utilized the Gaussian activation function due to the association properties of each radial unit, namely the center and width. Other activation functions, such as the Hyperbolic Activation Function [72], the Bipolar Activation Function [73], and the McCulloch-Pitts Activation Function [73], are not compatible with the proposed RBFNN due to the non-compatible classification interval; using those activation functions would result in an overfitting nature of the RBFNN. The classification outcome will utilize the same tolerance value, Tol = 0.001, proposed by Sathasivam [74]. The choice of the Tol value is to ensure the reduction of the possible statistical error between the target output and the actual output. In the aspect of the 2SAT logical rule, we only utilize the satisfiable logical rule, where min | f (w i ) − y i | is always zero.
The use of other non-satisfiable logic, such as maximum satisfiability [75], is only compatible for P learn = 0. The lists of parameters used in each RBFNN-2SATRA model are summarized in Table 1.

Datasets Description
The evaluation of the proposed AIS algorithm has been performed by utilizing five well-known, diverse real datasets chosen from the UCI Repository [54,76,77], which is widely used as a benchmark source by neural network practitioners. Table 2 describes these datasets in terms of the number of features, training samples, and test samples per dataset.

Results and Discussion
Based on the experiments, the performance of the training algorithms has been assessed for a varying number of neurons, 6 ≤ NN ≤ 120. Five different measurements have been used to assess the RBFNN-2SATRA models with the metaheuristic algorithms: Accuracy and the Schwarz Bayesian Criterion (SBC) assess the prediction accuracy, while the Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), and Central Processing Unit time (CPU time) reflect the structural complexity of the RBFNN-2SATRA network as the number of neurons rises. The measures are given below in their standard forms, reconstructed from the surrounding definitions.
RMSE [10] is a standard error estimator, which has been commonly used in predictions and classifications: RMSE = sqrt( (1/n) Σ ( f (w i ) − y i )² ). During the learning phase, RMSE measured the deviation of the error between the current value f (w i ) and y i . A lower RMSE indicates better accuracy of the model.
MAE is a loss-function type of error, which evaluates the straightforward difference between the expected value and the current value: MAE = (1/n) Σ | f (w i ) − y i |. During the learning phase, MAE measured the absolute difference between the current value f (w i ) and y i [70]. A smaller value of MAE indicates a better fit of the method.
MAPE [10] measures the size of the error in percentage terms: MAPE = (100/n) Σ | ( f (w i ) − y i ) / y i |. During the learning phase, MAPE measured the percentage difference between the current value f (w i ) and y i ; a lower MAPE leads to better accuracy in terms of percentage for the model.
SBC, in its standard form, is SBC = n ln(SSE/n) + pa ln(n), where SSE is the sum of squared errors and pa is the number of centers, widths, and output weights. Lower SBC values are better, and small error values indicate better accuracy. The accuracy is defined as Accuracy = (number of correctly classified test samples / total number of test samples) × 100%. The accuracy determines the ability of the system in training the dataset. Meanwhile, a lower CPU time indicates a more efficient algorithm.
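The error measures above can be computed directly (the SBC variant here is an assumption, taken as the common n·ln(SSE/n) + pa·ln(n) form; the sample values are hypothetical):

```python
import math

def rmse(pred, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

def mae(pred, actual):
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

def mape(pred, actual):                        # actual values must be nonzero
    return 100.0 * sum(abs((p - a) / a) for p, a in zip(pred, actual)) / len(pred)

def sbc(pred, actual, pa):
    """Schwarz Bayesian Criterion; pa = number of centers, widths, and output weights."""
    n = len(pred)
    sse = sum((p - a) ** 2 for p, a in zip(pred, actual))
    return n * math.log(sse / n) + pa * math.log(n)

pred, actual = [1.1, 1.9, 3.2], [1.0, 2.0, 3.0]
print(round(rmse(pred, actual), 4),            # 0.1414
      round(mae(pred, actual), 4),             # 0.1333
      round(mape(pred, actual), 2))            # 7.22
```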
The results of the RBFNN-2SATRA models with GA, DE, PSO, ABC, and AIS are summarized in Table 3. (1) RBFNN-2SATRAAIS achieved the minimal value of errors (Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Schwarz Bayesian Criterion (SBC), and Central Processing Unit (CPU) time). (2) The model RBFNN-2SATRAAIS showed the best performance in terms of RMSE, SBC, and CPU time, although the number of neurons increased; the important features of AIS, such as variation, recognition, memory, learning, and self-organization, influenced its performance capability. (3) RBFNN-2SATRAAIS showed the best performance in terms of SBC, although the number of neurons increased. According to Hamadneh et al. [21], the lowest value of SBC indicates that the model can be classified as the best model. (4) In terms of CPU time, the model RBFNN-2SATRAAIS has been reported to perform faster than the other RBFNN-2SATRA models. When the number of neurons exceeded 40, the possibility of GA, DE, and PSO being trapped in a trial-and-error state increased. Trial and error caused GA, DE, and PSO to reach premature convergence. On the other hand, RBFNN-2SATRA with ABC had a relative training error because, during the employed bee phase, the time of the algorithm was wasted without achieving significant improvement. The scout bee phase prevented the algorithm from being trapped at local minima after a certain count "limit" of unsuccessful improvement attempts. Several iterations were required for ABC to produce solutions (output weights) with high quality. These experiments have shown that the AIS algorithm can be successfully applied to train RBFNN-2SATRA because new generations are formed through cloning. In AIS, the number of search agents is not constant and increases due to cloning operations. Even the clone itself moved to the neighboring nodes, which led to fewer iterations being required for RBFNN-2SATRAAIS to produce a solution (output weight) with high quality.
The simulation results have authenticated that the AIS algorithm complied efficiently with RBFNN based on 2SATRA in terms of the average value of training, where RMSE rose up to 97.5%, SBC rose up to 99.9%, and CPU time by 99.8%, and the average value of testing, where MAE rose up to 78.5% and MAPE rose up to 71.4%; the model was also capable of classifying a higher percentage (81.6%) of the test samples compared to the results of the GA, DE, PSO, and ABC algorithms. These experiments also showed that the AIS algorithm can be strongly applied for training the RBFNN-2SATRA model. Another observation involves the efficacy of AIS, which can be clearly observed when increasing the number of neurons. Furthermore, AIS with RBFNN-2SATRA achieved promising performance based on RMSE, MAPE, MAE, SBC, and CPU time. This confirmed that AIS in RBFNN-2SATRA can be utilized in the pursuit of achieving better forecasting results for the 2SATRA logic rule.

Conclusions
The findings of the study confirmed the significant improvement of the RBFNN model via utilizing the AIS algorithm in performing 2SATRA to extract the best logical rule, which governs the behavior of the dataset. The new training method utilizing AIS was used to train five recognized datasets and was compared with four training algorithms: ABC, PSO, DE, and GA. To affirm the performance of the proposed algorithm, all algorithms were compared through analytical tests on RBFNN-2SATRA with different numbers of neurons. Based on the results, analysis, and discussion in this study, the following conclusions can be drawn. AIS showed a faster convergence rate with superior accuracy results. AIS achieved lower values of RMSE, MAE, and MAPE, a lower value of SBC, and a faster CPU time for training RBFNN-2SATRA. Therefore, AIS proved to be an effective approach for training RBFNN-2SATRA to classify different datasets with a diverse number of features and training samples, and it can generally train RBFNN-2SATRA with a differing number of neurons. The simulation results have proven that AIS complied efficiently with RBFNN-2SATRA in terms of the average value of training, where RMSE rose up to 97.5%, SBC rose up to 99.9%, and CPU time by 99.8%, and the average value of testing, where MAE rose up to 78.5% and MAPE rose up to 71.4%, alongside its capability of classifying a higher percentage (81.6%) of the test samples, compared to the results of the GA, DE, PSO, and ABC algorithms. The results confirmed that AIS significantly outperformed the other algorithms, particularly on substantially large datasets.
For future work, two key aspects are recommended. First, the proposed RBFNN-2SATRA can be investigated for other data mining tasks, such as time series prediction and regression. Second, further studies should examine the efficiency of RBFNN-2SATRAAIS in solving classical optimization problems, such as the N-Queens problem and the Traveling Salesman Problem.