Mathematics · Article · Open Access · 13 December 2022

Random Maximum 2 Satisfiability Logic in Discrete Hopfield Neural Network Incorporating Improved Election Algorithm

1 School of Mathematical Sciences, Universiti Sains Malaysia—USM, Gelugor 11800, Penang, Malaysia
2 School of Distance Education, Universiti Sains Malaysia—USM, Gelugor 11800, Penang, Malaysia
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Symbolic Methods of Machine Learning in Knowledge Discovery and Explainable Artificial Intelligence

Abstract

A real-life logical rule is not always satisfiable in nature due to redundant variables in the logical formulation. Thus, an intelligent system must be optimally governed so that it can behave according to a non-satisfiable structure, which finds practical applications particularly in knowledge discovery tasks. In this paper, we propose a non-satisfiability logical rule that combines two sub-logical rules, namely Maximum 2 Satisfiability and Random 2 Satisfiability, which play a vital role in creating explainable artificial intelligence. Interestingly, the combination results in a negative logical outcome where the cost function of the proposed logic is always more than zero. The proposed logical rule is implemented into the Discrete Hopfield Neural Network by computing the cost function associated with each variable in Random 2 Satisfiability. Since the proposed logical rule is difficult to optimize during the training phase of the DHNN, the Election Algorithm is implemented to find a consistent interpretation that minimizes the cost function of the proposed logical rule. The Election Algorithm has become a popular optimization metaheuristic for solving constrained optimization problems; its fundamental concepts are taken from socio-political phenomena, which use new and efficient processes to produce the best outcome. The behavior of Random Maximum 2 Satisfiability in the Discrete Hopfield Neural Network is investigated based on several performance metrics. The performance is compared between an existing conventional method, the Genetic Algorithm, and the Election Algorithm. The results demonstrate that the proposed Random Maximum 2 Satisfiability can become a symbolic instruction in the Discrete Hopfield Neural Network, where the Election Algorithm performed as a more effective training process for the Discrete Hopfield Neural Network compared to the Genetic Algorithm and Exhaustive Search.

1. Introduction

The research of artificial neural networks (ANNs) provides interesting ideas for understanding the way the brain interprets data and offers near-optimal solutions to optimization problems. The ANN model was inspired by groups of biological neurons that are efficiently modeled and fire according to the goal of the whole neuron system. For that reason, ANNs have gained attention from researchers with different backgrounds to solve various potential optimization problems [1,2,3,4,5]. The high demand for ANNs is due to their ability to improve the solution through specific iterations which can be run easily by a computer program. One of the earliest ANNs is the Hopfield Neural Network (HNN), proposed by Hopfield and Tank [6] to provide potential solutions to the travelling salesman problem through a connectionist model. HNNs consist of interconnected input and output neurons without hidden neurons. Each neuron fires and updates iteratively until the final neuron state converges towards a near-optimal solution. Interestingly, the neuron state of the HNN can be interpreted in terms of a Lyapunov energy function which is always minimized by the network. In this context, the HNN will update the neuron state until the network achieves the global minimum energy to ensure the optimality of the solution for any given optimization problem. Despite having wide applicability [7,8,9,10], HNNs are prone to storage-capacity issues. As shown by various studies [11,12], the number of stored memory patterns is severely limited (about 14%), which indicates the need for optimal neuron modeling in HNNs.
One of the earliest efforts to represent the neurons in the form of symbolic logic was proposed by Abdullah [13]. In that work, the HNN was viewed as a computational paradigm and symbolic rule rather than a tool to solve optimization problems. Logic was chosen as the symbolic rule in HNN because logic conventionally involves a database of declared knowledge, sequential procedure, and resolution, which help HNNs to prove or disprove the goal of the network. The introduction of logic as a symbolic rule in HNNs attracted other representations of logic. This led to the introduction of the Wan Abdullah method [14] to find the optimal synaptic weights associated with the embedded logical rule. This development attracted researchers to find other variants of logic to be embedded into HNNs. Kasihmuddin et al. [15] proposed 2 Satisfiability (2SAT) logic in HNNs by creating a cost function that capitalizes on the symmetric neuron connection. In that paper, the proposed 2SAT in HNN was effectively optimized using the Estimated Distribution Algorithm during the retrieval phase. As a result, the proposed HNN achieved a high global minima ratio. Next, Sathasivam et al. [16] proposed the first non-systematic logic, namely Random 2 Satisfiability (RAN2SAT), by combining first- and second-order clauses in the logic formulation. Although the quality of the final neuron state deteriorates as the number of neurons increases, the synaptic weights of the logic show a higher number of variations compared to other existing logic. Karim et al. [17] proposed higher-order non-systematic logic by introducing a third-order clause. That paper shows an interesting logical variant where first-, second-, and third-order clauses were proposed interchangeably. The direction of non-systematic logic was extended by Sidik et al. [18] and Zamri et al. [19], where Weighted Random 2 Satisfiability (r2SAT) was proposed in HNN. To create the correct r2SAT logic, a logic phase was introduced to ensure the right amount of negated literals was imposed on RAN2SAT. The proposed r2SAT was reported to obtain final neuron states with high total neuron variation. In another development, Guo et al. [20] combined the beneficial features of both systematic and non-systematic logic by proposing Y-Type Random 2 Satisfiability (YRAN2SAT). The proposed logic shows interesting behavior because YRAN2SAT can be reduced to both 2SAT and RAN2SAT. On the other hand, Gao et al. [21] extended the order of the clauses in the logic by adding a third-order clause. Although the final energy for [20,21] tends to converge to local minimum energy (due to the high number of neurons), both logics offer a wide range of flexibility for representing symbolic rules in HNN. Despite rapid development in the field of logic in HNN, none of the mentioned studies considers the existence of a redundant variable in the logical rule. In this context, a redundant variable with opposing literals will usually lead towards non-satisfiable logic.
Maximum Satisfiability (MAXSAT) is another variant of logical rule that is not satisfiable in nature. According to Bonet et al. [22], the goal of MAXSAT is to find the interpretation that maximizes the number of satisfied clauses. In this context, MAXSAT logic will never be fully satisfied, and the logical outcome is always False. Kasihmuddin et al. [23] proposed the first non-satisfiability logic, namely Maximum 2 Satisfiability (MAX2SAT), in HNN. The cost function of the proposed MAX2SAT only considers the logic that is satisfiable, where the synaptic weight for the non-satisfiable logic is zero. The proposed MAX2SAT utilized exhaustive search and was reported to achieve global minimum energy for a lower number of neurons. To reduce the error during the learning phase, Sathasivam et al. [24] proposed a genetic algorithm (GA) to find the correct interpretation that leads to a zero cost function. The proposed GA was reported to increase the storage capacity and successfully prevent HNN from obtaining sub-optimal synaptic weights. Although the proposed metaheuristics were reported to produce zero learning error, the capability of the algorithm in handling non-systematic MAX2SAT remains unknown. Throughout this process, the real parameters undergo some adjustments, and it takes several trials to get a good result. For further developments, in order to find the optimal solution, researchers have suggested distributionally robust optimization techniques to handle the non-convex non-linear structure of the high-dimensional parameter space of the neural network [25]. Next, some combinatorial problems based on min–max and min–max regret versions have been discussed in [26].
The Election Algorithm (EA) was initially proposed by Emami and Derakhshan [27] to optimize the solution of combinatorial problems. EA is a socio-political algorithm inspired by the presidential election process of a country. The intelligent search of EA can cover a wide range of solutions in a large solution space. In addition, the mechanism of EA partitions the solution space, where effective partitioning helps reduce complexity and allows the searching process to be more accurate. EA consists of three layers of optimization operators that help improve the solution in every iteration. Researchers have utilized EA for real-life problems, where the versatility of EA can cater to both continuous and discrete optimization problems. In terms of RAN2SAT, Sathasivam et al. [28] proposed the first binary EA to optimize the learning phase of the HNN. In this context, several functions in EA were replaced by binary operators to fit the fitness function of the HNN. The proposed network was reported to outperform most of the state-of-the-art algorithms in handling RAN2SAT. Next, Bazuhair et al. [29] utilized EA in finding the correct interpretation for higher-order RAN3SAT. Similar to the previous study, the proposed EA was reported to achieve almost zero error and maximum total neuron variation for RAN3SAT. This shows the superiority of EA in reducing the learning complexity of the HNN. However, the performance of the proposed EA in handling non-satisfiable logic remains unknown. In this context, the proposed EA must have the capability to optimize the fitness of the neurons with respect to a non-zero cost function. Thus, the contributions of the present paper are as follows:
  • To formulate a novel non-satisfiability logical rule by connecting Random 2 Satisfiability with Maximum 2 Satisfiability into one single formula called Random Maximum 2 Satisfiability. In this context, the logical outcome of the proposed logic is always False and the logic allows the existence of redundant variables. Thus, the goal of Random Maximum 2 Satisfiability is to find the interpretation that maximizes the number of satisfied clauses.
  • To implement the proposed Random Maximum 2 Satisfiability into the Discrete Hopfield Neural Network by finding the cost function of the sub-logical rule that is satisfiable. Each of the variables will be represented in terms of neurons, and the synaptic weights of the neurons can be found by comparing the cost function with the Lyapunov energy function.
  • To propose an Election Algorithm that consists of several operators, such as positive advertisement, negative advertisement, and coalition, to optimize the learning phase of the Discrete Hopfield Neural Network. In this context, the proposed EA will be utilized to find the interpretation that maximizes the number of satisfied clauses.
  • To evaluate the performance of the proposed hybrid network on simulated datasets. The hybrid network consisting of Random Maximum 2 Satisfiability, the Election Algorithm, and the Hopfield Neural Network will be evaluated based on various performance metrics. Note that the performance of the hybrid network will be compared with other state-of-the-art metaheuristic algorithms.
By creating an effective and efficient hybrid network, the proposed approach offers a new method to learn non-satisfiable logic, which accounts for most real-life problems. This paper is organized as follows: Section 2 provides preliminary explanations of Random Maximum 2 Satisfiability, how Random Maximum 2 Satisfiability is embedded in DHNN, the Genetic Algorithm, and the Election Algorithm. The methods and experimental setup are given in Section 3. The simulation results are discussed in Section 4. Finally, concluding remarks are given in Section 5.
Table 1 below shows the list of related research.
Table 1. Summaries of related studies.

3. Methodology

Figure 2 illustrates the general flow of the proposed study to give readers a better understanding of the approach. In the training phase, training algorithms such as ES, GA, and EA are implemented for Random Maximum 2 Satisfiability in DHNN to ensure the correct synaptic weights are obtained. ES operates based on random search to find the solution, whereas GA and EA have optimization operators that help improve the solution in every iteration. Next, the computation of the local field of the neuron state and the final energy occurs in the testing phase. The difference between the final energy and the minimum energy is checked against the tolerance value in order to verify the final energy of the network. The final energy of the proposed model is considered a global minimum solution if the difference is less than the tolerance value; otherwise, the solution is trapped in a local minimum.
Figure 2. General flow of the proposed study.
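To make the energy verification step concrete, the following minimal Python sketch shows how a retrieved solution could be classified as a global or local minimum solution. The tolerance value and the function name are illustrative assumptions, not the authors' implementation; the actual tolerance used in this study is listed in Table 2.

```python
TOL = 0.001  # assumed tolerance value for illustration only

def classify_solution(final_energy: float, min_energy: float, tol: float = TOL) -> str:
    """Label a retrieved neuron state as a global or local minimum solution.

    A solution counts as global when the final Lyapunov energy lies within
    the tolerance of the expected minimum energy, as described above.
    """
    return "global" if abs(final_energy - min_energy) <= tol else "local"
```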

3.1. Performance Metrics

In this section, the performance of DHNN-RANMAX2SATES, DHNN-RANMAX2SATGA, and DHNN-RANMAX2SATEA will be examined using various performance metrics. These performance metrics have been used by several researchers in neural network studies [18,19,20,21] to examine the actual performance of a network. The purpose of the simulation is to obtain the best training model of DHNN-RANMAX2SAT.

3.1.1. Root Mean Square Error (RMSE) and Mean Absolute Error (MAE)

RMSE and MAE measure the distance between the predicted value and the observed value of a model, and are used to measure the accuracy of performance. In general, RMSE represents the standard deviation of the differences between the target value and the observed value. RMSE can be expressed as [19]:
$\mathrm{RMSE} = \sum_{i=1}^{\varepsilon} \sqrt{\frac{1}{\varepsilon}\left(P_i - O_i\right)^2}$ ,
where $P_i$ is the predicted value and $O_i$ is the observed value. In this paper, the RMSE for the training error can be formulated as [19]:
$\mathrm{RMSE}_{\text{Training}} = \sum_{i=1}^{\varepsilon} \sqrt{\frac{1}{\varepsilon}\left(f_{\max} - f_i\right)^2}$ ,
where $f_{\max}$ is the total number of DHNN-RANMAX2SAT clauses, $f_i$ is the fitness of $P_{RM2SAT}$ computed by the network, and $\varepsilon$ is the number of iterations before $f_i = f_{\max}$. The RMSE for the testing error is expressed as
$\mathrm{RMSE}_{\text{Testing}} = \sum_{i=1}^{\varepsilon} \sqrt{\frac{1}{ab}\left(G_{P_{RM2SAT}} - L_{P_{RM2SAT}}\right)^2}$ ,
where $G_{P_{RM2SAT}}$ is the number of global minimum solutions and $L_{P_{RM2SAT}}$ is the number of local minimum solutions, $a$ is the number of combinations, and $b$ is the number of trials. MAE is defined as the average absolute difference between the predicted value and the observed value. The formula of MAE is given by [17]:
$\mathrm{MAE} = \sum_{i=1}^{\varepsilon} \frac{1}{\varepsilon}\left|P_i - O_i\right|$ ,
The MAE for training and testing used in this paper is
$\mathrm{MAE}_{\text{Training}} = \sum_{i=1}^{\varepsilon} \frac{1}{\varepsilon}\left|f_{\max} - f_i\right|$ ,
$\mathrm{MAE}_{\text{Testing}} = \sum_{i=1}^{\varepsilon} \frac{1}{ab}\left|G_{P_{RM2SAT}} - L_{P_{RM2SAT}}\right|$ .
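The sketch below illustrates how the RMSE and MAE training errors could be computed from the recorded fitness values; it is a hypothetical helper following the summation form of the equations above, not the authors' code.

```python
import math

def rmse_training(f_max: float, fitness: list) -> float:
    """RMSE training error: summation of per-iteration root terms, as above."""
    eps = len(fitness)
    return sum(math.sqrt((f_max - f_i) ** 2 / eps) for f_i in fitness)

def mae_training(f_max: float, fitness: list) -> float:
    """MAE training error: mean absolute deviation from the maximum fitness."""
    eps = len(fitness)
    return sum(abs(f_max - f_i) / eps for f_i in fitness)
```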

3.1.2. Mean Absolute Percentage Error (MAPE)

MAPE measures the size of the error in percentage. The formula of MAPE can be computed as [19]
$\mathrm{MAPE} = \sum_{i=1}^{\varepsilon} \frac{100}{\varepsilon}\frac{\left|P_i - O_i\right|}{\left|O_i\right|}$ ,
MAPE formula for training error is given as:
$\mathrm{MAPE}_{\text{Training}} = \sum_{i=1}^{\varepsilon} \frac{100}{\varepsilon}\frac{\left|f_{\max} - f_i\right|}{\left|f_i\right|}$ ,
and for testing error, the formula of MAPE is
$\mathrm{MAPE}_{\text{Testing}} = \sum_{i=1}^{\varepsilon} \frac{100}{\varepsilon}\frac{\left|G_{P_{RM2SAT}} - L_{P_{RM2SAT}}\right|}{\left|ab\right|}$ .
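Analogously, a hypothetical MAPE-training helper under the same assumptions as the sketch above:

```python
def mape_training(f_max: float, fitness: list) -> float:
    """MAPE training error in percent; iterations with zero fitness are skipped
    to avoid division by zero (an implementation detail assumed here)."""
    eps = len(fitness)
    return sum(100 / eps * abs(f_max - f_i) / abs(f_i) for f_i in fitness if f_i != 0)
```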

3.1.3. Global Minimum Ratio ($Z_m$)

The global minimum ratio measures the ratio between the total number of global minimum energy solutions and the total number of runs. Global minimum energy is obtained if the final energy is within the tolerance value [30]. The value of $Z_m$ can be obtained by the following formula [21]:
$Z_m = \frac{1}{ab}\sum_{i=1}^{\varepsilon} G_{P_{RM2SAT}}$ .

3.1.4. Jaccard Similarity Index (JSI)

The final neuron states retrieved by the DHNN-RANMAX2SATES, DHNN-RANMAX2SATGA, and DHNN-RANMAX2SATEA models will be analyzed using a similarity metric. The similarity metric chosen in this paper is the Jaccard similarity index utilized in [15]. The Jaccard similarity index is the ratio of the similarity between two distinct data points and has also been used in global evaluation. The Jaccard index for DHNN is as follows:
$\mathrm{JSI} = \frac{l}{l + m + n}$ ,
where
$l$ is the number of pairs $(f_{\max}, f_i)$ where both elements have the value 1;
$m$ is the number of pairs $(f_{\max}, f_i)$ where $f_{\max}$ is 1 and $f_i$ is −1;
$n$ is the number of pairs $(f_{\max}, f_i)$ where $f_{\max}$ is −1 and $f_i$ is 1.
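A short sketch of how $Z_m$ and the JSI could be computed is given below. Representing the benchmark and retrieved states as bipolar lists, and returning 0 when no pair contains a positive state, are assumptions made for illustration only.

```python
def global_minimum_ratio(num_global: int, a: int, b: int) -> float:
    """Z_m: fraction of the a*b retrieved solutions reaching global minimum energy."""
    return num_global / (a * b)

def jaccard_similarity(benchmark: list, retrieved: list) -> float:
    """JSI = l / (l + m + n) for two bipolar neuron states of equal length."""
    l = sum(1 for x, y in zip(benchmark, retrieved) if x == 1 and y == 1)
    m = sum(1 for x, y in zip(benchmark, retrieved) if x == 1 and y == -1)
    n = sum(1 for x, y in zip(benchmark, retrieved) if x == -1 and y == 1)
    return l / (l + m + n) if (l + m + n) else 0.0  # assumed convention
```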

3.2. Baseline Methods

Note that this paper uses simulated data generated randomly by a computer program. The DHNN model is compatible with binary and bipolar representations. This paper utilizes the bipolar representation of the logical structure, where 1 is defined as true and −1 is defined as false. Moreover, bipolar neuron states are used to evaluate the asynchronous neuron update in the DHNN model [30]. Furthermore, the bipolar representation converges faster than the binary representation. This paper does not consider the binary structure consisting of 0 and 1 because the value of 0 in the binary structure can eliminate important parameters. The use of bipolar and binary representations can be differentiated in the computation of the synaptic weights: the 0 value in the binary representation will lead to wrong synaptic weights or delete the synaptic weight. Thus, the bipolar representation helps the proposed model converge faster.
An effective relaxation method and activation function in DHNN can improve the stability of the final neuron states. In this paper, the Sathasivam relaxation method is used to retrieve correct neuron states and improve the proposed model. This is because the Sathasivam relaxation method helps neurons to hold or pause before resuming the exchange of information. This method also helps to reduce neuron oscillation and increases the efficiency of the network in finding stable neuron states. Earlier, the conventional model was Wan Abdullah's logic programming based on the McCulloch–Pitts function. However, [30] stated that the McCulloch–Pitts function retrieves more local minimum energy and consumes more time to retrieve global minimum energy. The hyperbolic tangent activation function (HTAF) was shown to be the most stable activation function compared to the McCulloch–Pitts and Elliot Symmetric Activation Functions in [32]. Hence, HTAF was chosen in this study because of its capability to squash neuron states before they are classified into final neuron states. Table 2 shows the parameters used for the DHNN-RANMAX2SAT model, and Table 3 and Table 4 show the GA and EA parameters used in this study.
Table 2. List of parameters used in DHNN model.
Table 3. List of GA parameters used in training phase.
Table 4. List of EA parameters used in training phase.
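As an illustration of the HTAF-based neuron update described above, the following sketch squashes the local field with tanh before classifying the neuron into a bipolar final state. The zero threshold and the function shape are assumptions consistent with the bipolar representation used here, not the authors' exact implementation.

```python
import math

def htaf_update(synaptic_weights: list, states: list, i: int) -> int:
    """Compute the local field of neuron i, squash it with tanh (HTAF),
    and classify the result into a bipolar final state."""
    local_field = sum(w * s for w, s in zip(synaptic_weights[i], states))
    squashed = math.tanh(local_field)
    return 1 if squashed >= 0 else -1
```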

3.3. Experimental Design

The proposed hybrid networks for the training and testing phases of DHNN-RANMAX2SAT are DHNN-RANMAX2SATES, DHNN-RANMAX2SATEA, and DHNN-RANMAX2SATGA. All proposed DHNN-RANMAX2SAT models will be implemented in Dev C++ Version 5.11 on a machine with a 3.1 GHz Intel Core i5 processor and 4 GB RAM running the Windows 10 operating system. The simulation will be carried out on only one device to avoid bias. The output is produced by the Dev C++ program and the graphs are plotted in MATLAB.

4. Results and Discussion

In this section, the performance of the three models, DHNN-RANMAX2SATES, DHNN-RANMAX2SATEA, and DHNN-RANMAX2SATGA, will be discussed. Note that the data are divided into two phases. The first phase covers the increment of 2SAT non-redundant variables in Random Maximum 2 Satisfiability, where 50 ≤ NN ≤ 210, and the second phase covers the increment of 1SAT non-redundant variables, where 210 < NN ≤ 300. This is important in order to observe the behavior of 1SAT and 2SAT in Random Maximum 2 Satisfiability. In both phases, the clauses with two redundant variables are included to represent the maximum-satisfiable part. The number of neurons is limited to 300 because the simulation threshold time is fixed to 24 h, and an exhaustive search took more than 24 h for more than 300 neurons. Therefore, the number of neurons is also limited to 300 for the Genetic Algorithm and Election Algorithm based on the proposed logical structure to maintain parallelism and produce comparable results. The results are discussed according to training error, testing error, energy analysis, and similarity analysis.

4.1. Training Error

ES, GA, and EA facilitate the training phase by checking clause satisfaction. These algorithmic approaches are utilized in this study to alter the parameters of the machine learning model and obtain an optimized solution. Thus, GA and EA play an important role in explainable artificial intelligence. The synaptic weight management of the proposed models is observed for all logical combinations of P_RM2SAT in this section. An optimal training phase is defined as the capability of the proposed model to minimize the cost function so as to generate the optimal synaptic weights according to the Wan Abdullah (WA) method [13]. In order to achieve a minimized cost function, ES, GA, and EA will be implemented and compared in this study. Figure 3 below illustrates the RMSE, MAE, and MAPE training errors of the proposed models.
Figure 3. (a) RMSE training for DHNN-RANMAX2SAT; (b) MAE training for DHNN-RANMAX2SAT; (c) MAPE training for DHNN-RANMAX2SAT.
(i)
Observe that, as the total number of neurons increases, the values of the RMSE, MAE, and MAPE training errors increase for DHNN-RANMAX2SATES, DHNN-RANMAX2SATEA, and DHNN-RANMAX2SATGA. We can also observe a drastic increase in phase 2 compared to phase 1 in the graphs. This is because the non-systematic logical structure in phase 2 consists of first-order clauses, whose chances of obtaining a satisfied interpretation are low compared to the second-order clauses in phase 1, which makes the graph increase. Therefore, overall, when the number of neurons increases, the number of satisfied interpretations decreases, which makes the training errors increase due to the complexity of the logical structure [16].
(ii)
According to Figure 3, it is noticeable that the highest training error at NN = 300 is shown by DHNN-RANMAX2SATES compared to DHNN-RANMAX2SATEA and DHNN-RANMAX2SATGA. This is due to the lower stability of the neurons during the training phase, causing ES to derive the wrong synaptic weights. Since ES operates by a random search method, the complexity of obtaining correct synaptic weights increases as the number of neurons increases.
(iii)
Observe that as the number of neurons increases, DHNN-RANMAX2SATGA manages to achieve lower training error compared to DHNN-RANMAX2SATES. Note that the crossover operator with a crossover rate of 1 in the Genetic Algorithm is able to change the fitness of the population frequently by using the fitness function [19]. Moreover, the mutation rate of 0.01, chosen based on [19], is able to obtain the optimum fitness. Therefore, it is easy for the chromosomes to achieve an optimal cost function and retrieve the correct synaptic weights.
(iv)
However, based on the graphs above, DHNN-RANMAX2SATEA outperformed DHNN-RANMAX2SATES and DHNN-RANMAX2SATGA as the number of neurons increased. Lower training error indicates better accuracy of the model. This is due to the proposed metaheuristic, in which EA enhanced the training phase of DHNN. DHNN-RANMAX2SATEA is efficient in retrieving global minimum energy due to the global search and local search operators of EA [27]. This indicates that the optimization operators in EA enhanced the training phase of DHNN-RANMAX2SATEA. The high rates of positive advertisement and negative advertisement, chosen based on [27], quicken the process of obtaining the candidate with maximum fitness. By dividing the solution space during the training phase, the synaptic weight management improved and the proposed model achieved the optimal training phase successfully.

4.2. Testing Error

An optimal testing phase is when the proposed model manages to retrieve final neuron states that produce a global minimum solution. Good synaptic weight management by the proposed model will result in obtaining a global minimum solution. Therefore, the main focus in analyzing the testing error is to observe the quality of the solution, i.e., whether the final neuron state produces a global minimum or local minimum solution according to Equation (20). Figure 4 demonstrates the performance of DHNN-RANMAX2SATES, DHNN-RANMAX2SATEA, and DHNN-RANMAX2SATGA during the testing phase.
Figure 4. (a) RMSE testing for DHNN-RANMAX2SAT; (b) MAE testing for DHNN-RANMAX2SAT; (c) MAPE testing for DHNN-RANMAX2SAT.
(i)
According to Figure 4, the graphs show similar trends: DHNN-RANMAX2SATES, DHNN-RANMAX2SATEA, and DHNN-RANMAX2SATGA each give a constant graph for both phase 1 and phase 2 as the number of neurons increases. This is due to the logical structure becoming more complex as it contains a greater number of neurons. In this case, as the number of neurons increases, the logical structure fails to retrieve more final states that lead to global minimum energy.
(ii)
Based on Figure 4, the RMSE, MAE, and MAPE testing errors of DHNN-RANMAX2SATES increase for 50 ≤ NN ≤ 210. ES is a random searching algorithm; the training phase is affected by this nature of ES, which continuously affects the testing phase and thus results in high testing error values. Wrong synaptic weights are retrieved during the testing phase due to the inefficiency of synaptic weight management. The complexity of the network increases when NN > 210, resulting in a constant graph with maximum values of the RMSE, MAE, and MAPE testing errors. Thus, the DHNN-RANMAX2SATES model starts retrieving non-optimal states.
(iii)
According to the graphs, the accumulated errors are mostly 0 for DHNN-RANMAX2SATGA for the RMSE, MAE, and MAPE testing errors. This is because the metaheuristic GA consists of optimization operators that help improve the solution. It can be deduced that GA rarely gets trapped in local minima solutions. The operators of GA always search for optimal solutions which correspond to the global minimum energy. Moreover, the mutation operator in GA reduces the chances of the bit string retrieving local minima solutions. Thus, this results in zero values of the RMSE, MAE, and MAPE testing errors as the number of neurons increases.
(iv)
Notice that the graphs of the RMSE, MAE, and MAPE testing errors of DHNN-RANMAX2SATEA are also constant, achieving zero testing error as the number of neurons increases. Lower RMSE, MAE, and MAPE testing errors indicate the effectiveness of the proposed model in generating more global minimum energy. This is due to the effective synaptic weight management during the training phase of DHNN-RANMAX2SATEA. The presence of the local search and global search operators in EA, which divide the solution space during the training phase, is the main reason the synaptic weight management improves during the retrieval phase. This leads DHNN-RANMAX2SATEA to produce global minimum energy in the testing phase.
(v)
Generally, we can say that DHNN-RANMAX2SATEA and DHNN-RANMAX2SATGA outperformed DHNN-RANMAX2SATES in terms of the RMSE, MAE, and MAPE testing errors. This indicates that ES failed to retrieve optimal synaptic weights during the training phase, which consequently affected the testing phase and resulted in local minima solutions. Meanwhile, GA and EA find more variations of the solution (more global solutions). Therefore, DHNN-RANMAX2SATEA and DHNN-RANMAX2SATGA help the network to reduce the generation of local minimum energy by achieving zero RMSE, MAE, and MAPE testing errors.

4.3. Energy Analysis (Global Minimum Ratio)

The Global Minimum Ratio (Z_m) produced by DHNN-RANMAX2SATES, DHNN-RANMAX2SATEA, and DHNN-RANMAX2SATGA during the retrieval phase is shown in Figure 5. The amount of global minimum energy produced by the network can determine the efficiency of the network. Therefore, if the global minimum ratio of the proposed network is close to 1, most of the solutions in the network reached the correct final state during the retrieval phase. In the network, 10,000 bit-string solutions are produced in each simulation. For example, a global minima ratio value of 0.9981 means that 9981 bit strings attain global minimum energy and only 19 bit strings attain local minimum energy. In [20], it was discussed how the energy produced at the end of the process correlates with the global minima ratio.
Figure 5. Global minimum ratio, Z_m, produced by DHNN-RANMAX2SAT.
(i)
According to the graph, DHNN-RANMAX2SATES shows a decrease when 50 ≤ NN ≤ 210, with Z_m almost 0. At this stage, ES is only able to produce much less global minimum energy because most of the solutions are trapped at sub-optimal states. When the number of neurons increases in ES, the network becomes more complex. Thus, the local field is not able to generate the correct state of the neurons as the number of neurons increases. Hence, we can observe a constant graph at Z_m = 0 when NN > 210.
(ii)
However, DHNN-RANMAX2SATGA manages to achieve Z_m of almost 1 as the total number of neurons increases, which indicates that most of the final neuron states in the solution space achieved global minimum energy [19]. The complexity of the searching technique has been reduced by implementing GA. The crossover stage improves the unsatisfied bit string with the highest fitness, and the bit strings improve as they achieve higher fitness with an increasing number of generations. Therefore, GA produces many bit strings that achieve global minimum energy compared to the exhaustive search method.
(iii)
Similarly, DHNN-RANMAX2SATEA also manages to achieve Z_m of almost 1 as the total number of neurons increases. This indicates that DHNN-RANMAX2SATEA manages to obtain stable final neuron states. The reason is the capability of DHNN-RANMAX2SATEA to achieve an optimal training phase, which results in an optimal testing phase where global minimum energy is produced. Moreover, EA produces bit strings with less complexity by partitioning the solution space into 4 parties. The number of local solutions produced at the end of computation is further reduced by the effective relaxation method with a relaxation rate of 3 [30].
(iv)
Generally, based on the outcomes, DHNN-RANMAX2SATES, DHNN-RANMAX2SATEA, and DHNN-RANMAX2SATGA are able to withstand the complexity up to 300 neurons. It was observed that more than 99% of the final neuron states in DHNN-RANMAX2SATGA and DHNN-RANMAX2SATEA achieved the global minimum solution, whereas only 0.1% of the final neuron states in DHNN-RANMAX2SATES achieved the global minimum solution. Therefore, DHNN-RANMAX2SATGA and DHNN-RANMAX2SATEA outperformed DHNN-RANMAX2SATES as the number of neurons increased in terms of the global minimum ratio.

4.4. Similarity Analysis (Jaccard Similarity Index)

Figure 6 shows the JSI produced by DHNN-RANMAX2SATES, DHNN-RANMAX2SATEA, and DHNN-RANMAX2SATGA models. Similarity analysis was performed to analyze the final neuron state by comparing the retrieved neuron state with the benchmark neuron state. The JSI was chosen to investigate the quality of the solutions produced by DHNN-RANMAX2SATES, DHNN-RANMAX2SATGA, and DHNN-RANMAX2SATEA.
Figure 6. Jaccard Similarity Index, JSI produced by the DHNN-RANMAX2SAT.
(i)
Based on Figure 6, DHNN-RANMAX2SATES shows the highest JSI at NN = 50. This indicates major deviation and bias in the generated final states. The high value of JSI indicates that the model overfits, as the DHNN-RANMAX2SATES model failed to produce differences in the final states of the neurons. However, there is a decreasing trend from NN = 130 to NN = 170. The decreasing JSI shows that the generated final neuron states become more varied as the number of neurons increases [15]. This is due to the fewer benchmark neurons generated during the retrieval phase by the proposed model.
(ii)
However, the JSI stops registering any value when NN > 210 because all the solutions retrieved by the network are local solutions. This is because the nature of ES, which operates based on trial and error, affects the minimization of the cost function. Since ES failed to produce optimal synaptic weights in the training phase, this affects the final neuron states produced by the model at the end of computation.
(iii)
According to Figure 6, we can see that the fluctuation of the JSI for DHNN-RANMAX2SATGA and DHNN-RANMAX2SATEA increases as the total number of neurons increases. This increases the chances of the neurons being trapped at local minima. A higher number of total clauses implies more training error during the training phase, which causes less variation of the final solution compared with the benchmark solution. Thus, this causes the JSI trend for DHNN-RANMAX2SATGA and DHNN-RANMAX2SATEA to increase.
(iv)
However, DHNN-RANMAX2SATEA has the lowest Jaccard index value at NN = 90. In this case, the neurons retrieved from DHNN-RANMAX2SATEA have the lowest similarity with the benchmark state. A higher number of neuron variations produced by the network gives a lower similarity index value. This shows that the network produces less overfitting of the final states of the neurons.

4.5. Statistical Analysis

A Friedman test was conducted for all DHNN-RANMAX2SAT models based on the RMSE training results. The analysis from the Friedman test provides insight into whether the differences between the DHNN models in terms of RMSE training are statistically significant. Initially, the null hypothesis H_0 is defined, whereby H_0: there is no significant difference in RMSE training between the models. The degrees of freedom considered is df = 2 with a significance level of α_0 = 0.05 (95% confidence interval). The attained p-value was 0.000045 with a Chi-square value of χ² = 20. Since the p-value is much less than α_0 = 0.05, H_0 is rejected. This implies that the performance of the DHNN-RANMAX2SAT models in the training phase is not equal, i.e., the differences are statistically significant. Hence, the superiority of DHNN-RANMAX2SATEA reported in Figure 3a is acknowledged. As DHNN-RANMAX2SATEA achieved the best rank of 1 compared to the other algorithms, this highlights the importance of implementing an optimal training algorithm to maximize the number of satisfied clauses of RANMAX2SAT.
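For reproducibility, the Friedman test above can be run with SciPy. The sketch below is a hypothetical helper, where each argument holds one model's RMSE-training values measured at the same numbers of neurons (the data plotted in Figure 3a); the function name and argument layout are assumptions.

```python
from scipy import stats

def friedman_rmse(rmse_es, rmse_ga, rmse_ea, alpha=0.05):
    """Friedman test across the three DHNN-RANMAX2SAT models.

    Returns the chi-square statistic, the p-value, and whether the null
    hypothesis of equal performance is rejected at the given level.
    """
    statistic, p_value = stats.friedmanchisquare(rmse_es, rmse_ga, rmse_ea)
    return statistic, p_value, p_value < alpha
```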

5. Conclusions

One of the significant milestones in AI is to create a DHNN that has the ability to learn optimally. This can be done by implementing flexible logic into DHNN. This paper serves as a benchmark for further implementations of non-satisfiability in DHNN. First, this study introduces a new logical rule, namely RANMAX2SAT, by combining two logical formulations, that is, a satisfiable and a non-satisfiable one. Note that each clause in RANMAX2SAT contains redundant variables, and this is the first attempt to introduce non-systematic logic into MAX2SAT (refer to Equation (3)). Second, the proposed RANMAX2SAT was implemented into DHNN as DHNN-RANMAX2SAT, a symbolic rule that governs the connections of the neurons. This was done by comparing the cost function in Equation (8) with the energy function in Equation (13). It is worth mentioning that the Ising spin of the neurons in DHNN-RANMAX2SAT follows the work of [33], where the dynamics converge to the nearest local minimum energy. Third, the proposed model was optimized by using EA, yielding DHNN-RANMAX2SATEA, which was inspired by socio-political metaheuristics. The proposed EA was used to find the interpretation that leads to a minimized cost function. From the perspective of Equation (3), the proposed EA will only learn the satisfiable part that formulates the whole logical formulation. Again, this is the first introduction of EA in optimizing the learning of a DHNN logic that is not satisfiable and has a nonzero cost function. Finally, the quality of the solutions of the DHNN-RANMAX2SAT model was tested in terms of various performance metrics. According to the experimental results, the proposed DHNN-RANMAX2SATEA outperformed other existing DHNN models in terms of root mean square error, mean absolute error, mean absolute percentage error, global minima ratio, and Jaccard similarity analysis. It was observed that more than 99% of the final neuron states in DHNN-RANMAX2SATEA achieved the global minimum solution. This shows that the proposed DHNN-RANMAX2SATEA managed to achieve optimal training and testing phases, which indicates the possibility of RANMAX2SAT becoming an optimal symbolic rule for DHNN. As for future work, there are several interesting directions worth exploring. The proposed RANMAX2SAT can be implemented in other subsets of ANN such as the Boolean Neural Network [34], Graph Neural Network [35], or Kohonen Neural Network [36]. Due to the nature of RANMAX2SAT, it would be interesting to observe the potential cost functions of the mentioned ANN variants. In terms of the learning phase, recent metaheuristic algorithms such as the Black Hole Algorithm [37], Driving Training-Based Optimization [38], Honey Badger Algorithm [39], Harmony Search-based Algorithm [40], and Gradient-Based Optimizer [41] can also be implemented. The key here is to embed the features of RANMAX2SAT into the objective function of the mentioned algorithms. Moreover, it would be worth exploring other effective mechanisms to ensure the neurons in DHNN always converge to the global minimum energy; for instance, the implementation of the mutation operator [42] and the memristor [43] were reported to increase the search space of the DHNN. Finally, the robust DHNN-RANMAX2SATEA has good potential to become a forecasting model for various real-life processes that are random in nature, such as flood modeling, seismic modeling, and tsunami modeling. This can inspire the next implementation of a large-scale logic mining design incorporated with DHNN-RANMAX2SATEA, which has the ability to classify and forecast.

Author Contributions

Conceptualization, project administration, formal analysis, and writing, V.S.; validation, M.F.M.; supervision, M.S.M.K.; writing—review and editing, N.E.Z.; writing—review and editing, S.S.M.S.; validation, S.Z.M.J.; validation and funding acquisition, M.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

The research is fully funded and supported by Universiti Sains Malaysia, Short Term Grant, 304/PMATHS/6315390.

Data Availability Statement

Not applicable.

Acknowledgments

All the authors gratefully acknowledge the financial support from the "Universiti Sains Malaysia, Short Term Grant, 304/PMATHS/6315390". We would like to express great appreciation to Revin Samuel James Selva Kumar for his helpful support in the completion of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

To better understand Algorithm 3, the process of the Election Algorithm will be explained with an illustrative example. EA is utilized to obtain the optimal solution for P_RM2SAT that minimizes the cost function during the training phase of the DHNN. The example of the RANMAX2SAT formulation is taken from Equation (6).
(i)
Initialization of the population and formation of initial parties.
The population, which consists of candidates and voters, will be initialized. In this illustrative example, a random population of 20 individuals, N_Pop^EA, will be initialized. The individuals consist of candidates and voters and can be represented as S_i = [S_1, S_2, S_3, …, S_20], where S_i ∈ {1, −1}. The solution space is partitioned into 4 parties; thus, the population of 20 is divided into 4 parties. The eligibility of each individual will be calculated based on Equation (30). The individual with the highest eligibility will be the first candidate and is highlighted in orange. Table A1, Table A2, Table A3 and Table A4 show the voters and candidates in Party 1, Party 2, Party 3, and Party 4, respectively.
Table A1. S2 selected as candidate in Party 1.
S_i    A    B    X1   Y1   X2   Y2   Z1   Z2   f_RM2SAT^EA
S1     1    1   −1   −1    1    1    1    1    6
S2    −1    1    1    1   −1   −1    1    1    6
S3     1   −1    1    1   −1   −1   −1   −1    4
S4     1    1    1    1   −1   −1   −1   −1    4
S5     1    1   −1    1   −1   −1   −1    1    5
Table A2. S6 selected as candidate in Party 2.
S_i    A    B    X1   Y1   X2   Y2   Z1   Z2   f_RM2SAT^EA
S6     1    1    1    1   −1   −1    1    1    6
S7    −1    1   −1    1   −1   −1   −1   −1    4
S8     1   −1   −1   −1   −1    1   −1   −1    4
S9     1   −1   −1   −1   −1    1   −1    1    5
S10    1   −1    1    1   −1   −1   −1   −1    4
Table A3. S14 selected as candidate in Party 3.
S_i    A    B    X1   Y1   X2   Y2   Z1   Z2   f_RM2SAT^EA
S11    1    1    1    1    1    1   −1   −1    5
S12    1    1   −1   −1    1    1   −1   −1    4
S13    1    1    1   −1   −1   −1   −1    1    4
S14   −1    1    1    1   −1   −1    1    1    6
S15   −1   −1   −1   −1    1    1   −1   −1    4
Table A4. S18 selected as candidate in Party 4.
S_i    A    B    X1   Y1   X2   Y2   Z1   Z2   f_RM2SAT^EA
S16    1    1    1    1   −1   −1   −1   −1    4
S17   −1   −1   −1   −1   −1    1   −1   −1    4
S18   −1    1   −1   −1   −1    1    1    1    5
S19   −1   −1   −1   −1    1    1   −1   −1    4
S20   −1   −1   −1   −1   −1   −1   −1   −1    3
(ii)
Positive Advertisement
The number of voters, v_ij, that will be influenced by the candidate, L_j, is calculated by Equation (32) with σ_p = 0.5. Therefore, N_S = 2 influenced voters will be selected randomly. The number of neuron states that will be updated for the influenced voters is determined based on Equation (34). The candidate L_j will be replaced if a voter has a higher eligibility value than that candidate. Note that the individual highlighted in red denotes the new candidate, L_j. The individual highlighted in green denotes an old candidate, L_j. The individuals highlighted in blue denote influenced voters. The neuron states that have been updated are highlighted in yellow. Table A5, Table A6, Table A7 and Table A8 show the process of positive advertisement in Party 1, Party 2, Party 3, and Party 4, respectively.
Table A5. S2 remained as candidate in Party 1.
S_i    A    B    X1   Y1   X2   Y2   Z1   Z2   f_RM2SAT^EA
S1     1    1   −1   −1    1    1    1    1    6
S2    −1    1    1    1   −1   −1    1    1    6
S3     1   −1    1    1    1    1   −1   −1    5
S4     1    1    1    1    1    1   −1   −1    5
S5     1    1   −1    1   −1   −1   −1    1    5
S3 and S4 are the influenced voters and have undergone the state-flipping process. Since S3 and S4 have lower fitness than S2, S2 remains the candidate.
Table A6. S6 remained as candidate in Party 2.
S_i    A    B    X1   Y1   X2   Y2   Z1   Z2   f_RM2SAT^EA
S6     1    1    1    1   −1   −1    1    1    6
S7    −1    1   −1    1    1    1   −1   −1    5
S8     1   −1   −1   −1    1   −1   −1   −1    4
S9     1   −1   −1   −1   −1    1   −1    1    5
S10    1   −1    1    1   −1   −1   −1   −1    4
S7 and S8 are the influenced voters and have undergone the state-flipping process. Since S7 and S8 have lower fitness than S6, S6 remains the candidate.
Table A7. S13 selected as candidate in Party 3.
S_i    A    B    X1   Y1   X2   Y2   Z1   Z2   f_RM2SAT^EA
S11    1    1    1    1    1    1   −1   −1    5
S12    1    1   −1   −1    1   −1    1   −1    5
S13    1    1    1   −1   −1    1    1    1    7
S14   −1    1    1    1   −1   −1    1    1    6
S15   −1   −1   −1   −1    1    1   −1   −1    4
S12 and S13 are the influenced voters and have undergone the state-flipping process. Since S13 has higher fitness than S14, S13 is selected as the candidate.
Table A8. S19 selected as candidate in Party 4.
S_i    A    B    X1   Y1   X2   Y2   Z1   Z2   f_RM2SAT^EA
S16    1    1    1    1   −1   −1   −1   −1    4
S17   −1   −1   −1   −1   −1    1    1    1    5
S18   −1    1   −1   −1   −1    1    1    1    5
S19   −1   −1   −1   −1    1    1    1    1    6
S20   −1   −1   −1   −1   −1   −1   −1   −1    3
S17 and S19 are the influenced voters and have undergone the state-flipping process. Since S19 has higher fitness than S18, S19 is selected as the candidate.
(iii)
Negative Advertisement
The number of voters v_i* that will be attracted by the candidate L_j can be calculated by Equation (28) with σ_n = 0.5. Note that Party 1 will attract voters from Party 3 and Party 2 will attract voters from Party 4. The number of neuron states that will be updated for the attracted voters is determined based on Equation (38). The candidate L_j will be replaced if a voter has a higher eligibility value than that candidate. Note that the individual highlighted in red denotes the new candidate, L_j. The individual highlighted in green denotes an old candidate, L_j. The individuals highlighted in blue denote attracted voters v_i* in the new party and in gray in the old party. The neuron states that have been updated are highlighted in yellow. Table A9, Table A10, Table A11 and Table A12 show the process of negative advertisement in Party 1, Party 2, Party 3, and Party 4, respectively.
Table A9. S2 remained as candidate in Party 1.
S_i    A    B    X1   Y1   X2   Y2   Z1   Z2   f_RM2SAT^EA
S1     1    1   −1   −1    1    1    1    1    6
S2    −1    1    1    1   −1   −1    1    1    6
S3     1   −1    1    1    1    1   −1   −1    5
S4     1    1    1    1    1    1   −1   −1    5
S5     1    1   −1    1   −1   −1   −1    1    5
S15   −1   −1   −1   −1    1    1    1    1    6
Party 1 gained S15 from Party 3. S2 remains the candidate of Party 1.
Table A10. S18 selected as candidate in Party 2.
S_i    A    B    X1   Y1   X2   Y2   Z1   Z2   f_RM2SAT^EA
S6     1    1    1    1   −1   −1    1    1    6
S7    −1    1   −1    1    1    1   −1   −1    5
S8     1   −1   −1   −1    1   −1   −1   −1    4
S9     1   −1   −1   −1   −1    1   −1    1    5
S10    1   −1    1    1   −1   −1   −1   −1    4
S18   −1    1    1    1    1   −1    1    1    7
Party 2 gained S18 from Party 4. Since S18 has higher fitness than S6, S18 is selected as the candidate.
Table A11. Party 3 lost S15 to Party 1.
S_i    A    B    X1   Y1   X2   Y2   Z1   Z2   f_RM2SAT^EA
S11    1    1    1    1    1    1   −1   −1    5
S12    1    1   −1   −1    1   −1    1   −1    5
S13    1    1    1   −1   −1    1    1    1    7
S14   −1    1    1    1   −1   −1    1    1    6
S15   −1   −1   −1   −1    1    1   −1   −1    4
Table A12. Party 4 lost S18 to Party 2.
S_i    A    B    X1   Y1   X2   Y2   Z1   Z2   f_RM2SAT^EA
S16    1    1    1    1   −1   −1   −1   −1    4
S17   −1   −1   −1   −1   −1    1    1    1    5
S18   −1    1   −1   −1   −1    1    1    1    5
S19   −1   −1   −1   −1    1    1    1    1    6
S20   −1   −1   −1   −1   −1   −1   −1   −1    3
(iv)
Coalition
Two parties will be grouped together, and the individual with the highest eligibility value in the coalition party will be the candidate L_j. The number of neuron states that will be updated for all voters v_i* is determined based on the corresponding equation. The candidate L_j will be replaced if a voter has a higher eligibility value than the candidate. Note that the individual highlighted in red denotes the new candidate L_j. The neuron states that have been updated are highlighted in yellow. Table A13 shows the coalition of Party 1 and Party 4, and Table A14 shows the coalition of Party 2 and Party 3.
Table A13. Coalition of Party 1 and Party 4.
S_i    A    B    X1   Y1   X2   Y2   Z1   Z2   f_RM2SAT^EA
S1    −1   −1    1    1   −1   −1   −1   −1    6
S2    −1    1    1    1   −1   −1    1    1    6
S3     1    1   −1    1    1    1    1    1    5
S4     1    1    1    1   −1   −1    1    1    5
S5     1    1   −1   −1    1    1    1    1    5
S15    1    1    1    1   −1   −1   −1   −1    4
S16    1    1    1    1   −1   −1   −1   −1    6
S17    1    1    1    1   −1   −1    1    1    4
S19    1    1    1    1   −1   −1   −1   −1    6
S20   −1   −1   −1   −1   −1   −1    1    1    3
Note that Party 1 formed a coalition with Party 4. The individual S2 remained the candidate of this coalition party.
Table A14. Coalition of Party 2 and Party 3.
S_i    A    B    X1   Y1   X2   Y2   Z1   Z2   f_RM2SAT^EA
S6     1    1   −1   −1    1    1    1    1    6
S7    −1    1   −1    1   −1   −1   −1   −1    4
S8     1   −1   −1    1   −1   −1   −1   −1    4
S9     1   −1   −1   −1   −1    1    1   −1    5
S10    1   −1   −1   −1   −1   −1   −1   −1    3
S18   −1    1    1    1    1   −1    1    1    7
S11    1    1    1    1   −1   −1   −1   −1    4
S12    1    1   −1   −1   −1   −1   −1   −1    3
S13   −1   −1   −1    1    1   −1   −1   −1    5
S14   −1    1    1   −1    1    1   −1    1    6
Note that Party 2 formed a coalition with Party 3. The individual S18 remained the candidate of this coalition party.
(v)
Election Day
The final eligibility of the candidates from both coalition parties will be compared. If the eligibility value of a candidate is the maximum (f_RM2SAT^EA = 7), that candidate will be elected. In this case, since S18 achieved the maximum eligibility value and a higher eligibility value than S2, S18 is selected as the winner.
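To complement the worked example, the following Python sketch outlines the eligibility evaluation and the positive-advertisement step in a generic form. The data structures, the parameter values (two influenced voters, two flipped states), and the helper names are assumptions for illustration only and not the authors' implementation.

```python
import random

def eligibility(state: dict, clauses: list) -> int:
    """Eligibility (fitness f_RM2SAT): number of satisfied clauses.

    `state` maps variable names to bipolar values {1, -1}; each clause is a
    list of (variable, negated) literal pairs.
    """
    satisfied = 0
    for clause in clauses:
        if any(state[var] == (-1 if negated else 1) for var, negated in clause):
            satisfied += 1
    return satisfied

def positive_advertisement(party: list, candidate: dict, clauses: list,
                           n_voters: int = 2, n_flip: int = 2) -> dict:
    """Flip a few states of randomly chosen voters; promote a voter whose
    eligibility exceeds the current candidate's."""
    voters = [s for s in party if s is not candidate]
    for voter in random.sample(voters, min(n_voters, len(voters))):
        for var in random.sample(list(voter), n_flip):
            voter[var] *= -1  # bipolar state flip
        if eligibility(voter, clauses) > eligibility(candidate, clauses):
            candidate = voter
    return candidate
```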

References

  1. Liu, X.; Qu, X.; Ma, X. Improving flex-route transit services with modular autonomous vehicles. Transp. Res. Part E Logist. Transp. Rev. 2021, 149, 102331. [Google Scholar] [CrossRef]
  2. Chen, X.; Wu, S.; Shi, C.; Huang, Y.; Yang, Y.; Ke, R.; Zhao, J. Sensing data supported traffic flow prediction via denoising schemes and ANN: A comparison. IEEE Sens. J. 2020, 20, 14317–14328. [Google Scholar] [CrossRef]
  3. Chereda, H.; Bleckmann, A.; Menck, K.; Perera-Bel, J.; Stegmaier, P.; Auer, F.; Kramer, F.; Leha, A.; Beißbarth, T. Explaining decisions of graph convolutional neural networks: Patient-specific molecular subnetworks responsible for metastasis prediction in breast cancer. Genome Med. 2021, 13, 42. [Google Scholar] [CrossRef] [PubMed]
  4. Lin, Y. Prediction of temperature distribution on piston crown surface of dual-fuel engines via a hybrid neural network. Appl. Therm. Eng. 2022, 218, 119269. [Google Scholar] [CrossRef]
  5. Zhou, L.; Wang, P.; Zhang, C.; Qu, X.; Gao, C.; Xie, Y. Multi-mode fusion BP neural network model with vibration and acoustic emission signals for process pipeline crack location. Ocean. Eng. 2022, 264, 112384. [Google Scholar] [CrossRef]
  6. Hopfield, J.J.; Tank, D.W. “Neural” computation of decisions in optimization problems. Biol. Cybern. 1985, 52, 141–152. [Google Scholar] [CrossRef]
  7. Xu, S.; Wang, X.; Ye, X. A new fractional-order chaos system of Hopfield neural network and its application in image encryption. Chaos Solitons Fractals 2022, 157, 111889. [Google Scholar] [CrossRef]
  8. Boykov, I.; Roudnev, V.; Boykova, A. Stability of Solutions to Systems of Nonlinear Differential Equations with Discontinuous Right-Hand Sides: Applications to Hopfield Artificial Neural Networks. Mathematics 2022, 10, 1524. [Google Scholar] [CrossRef]
  9. Xu, X.; Chen, S. An Optical Image Encryption Method Using Hopfield Neural Network. Entropy 2022, 24, 521. [Google Scholar] [CrossRef]
  10. Mai, W.; Lee, R.S. An Application of the Associate Hopfield Network for Pattern Matching in Chart Analysis. Appl. Sci. 2021, 11, 3876. [Google Scholar] [CrossRef]
  11. Folli, V.; Leonetti, M.; Ruocco, G. On the maximum storage capacity of the Hopfield model. Front. Comput. Neurosci. 2017, 10, 144. [Google Scholar] [CrossRef]
  12. Lee, D.L. Pattern sequence recognition using a time-varying Hopfield network. IEEE Trans. Neural Netw. 2002, 13, 330–342. [Google Scholar] [CrossRef]
  13. Abdullah, W.A.T.W. Logic programming on a neural network. Int. J. Intell. Syst. 1992, 7, 513–519. [Google Scholar] [CrossRef]
  14. Abdullah, W.A.T.W. Logic Programming in neural networks. Malays. J. Comput. Sci. 1996, 9, 1–5. Available online: https://ijps.um.edu.my/index.php/MJCS/article/view/2888 (accessed on 1 June 1996). [CrossRef]
  15. Kasihmuddin, M.M.S.; Mansor, M.A.; Md Basir, M.F.; Sathasivam, S. Discrete mutation Hopfield neural network in propositional satisfiability. Mathematics 2019, 7, 1133. [Google Scholar] [CrossRef]
  16. Sathasivam, S.; Mansor, M.A.; Ismail, A.I.M.; Jamaludin, S.Z.M.; Kasihmuddin, M.S.M.; Mamat, M. Novel Random k Satisfiability for k ≤ 2 in Hopfield Neural Network. Sains Malays. 2020, 49, 2847–2857. [Google Scholar] [CrossRef]
  17. Karim, S.A.; Zamri, N.E.; Alway, A.; Kasihmuddin, M.S.M.; Ismail, A.I.M.; Mansor, M.A.; Hassan, N.F.A. Random satisfiability: A higher-order logical approach in discrete Hopfield Neural Network. IEEE Access 2021, 9, 50831–50845. [Google Scholar] [CrossRef]
  18. Sidik, M.S.S.; Zamri, N.E.; Mohd Kasihmuddin, M.S.; Wahab, H.A.; Guo, Y.; Mansor, M.A. Non-Systematic Weighted Satisfiability in Discrete Hopfield Neural Network Using Binary Artificial Bee Colony Optimization. Mathematics 2022, 10, 1129. [Google Scholar] [CrossRef]
  19. Zamri, N.E.; Azhar, S.A.; Mansor, M.A.; Alway, A.; Kasihmuddin, M.S.M. Weighted Random k Satisfiability for k = 1, 2 (r2SAT) in Discrete Hopfield Neural Network. Appl. Soft Comput. 2022, 126, 109312. [Google Scholar] [CrossRef]
  20. Guo, Y.; Kasihmuddin, M.S.M.; Gao, Y.; Mansor, M.A.; Wahab, H.A.; Zamri, N.E.; Chen, J. YRAN2SAT: A novel flexible random satisfiability logical rule in discrete hopfield neural network. Adv. Eng. Softw. 2022, 171, 103169. [Google Scholar] [CrossRef]
  21. Gao, Y.; Guo, Y.; Romli, N.A.; Kasihmuddin, M.S.M.; Chen, W.; Mansor, M.A.; Chen, J. GRAN3SAT: Creating Flexible Higher-Order Logic Satisfiability in the Discrete Hopfield Neural Network. Mathematics 2022, 10, 1899. [Google Scholar] [CrossRef]
  22. Bonet, M.L.; Buss, S.; Ignatiev, A.; Morgado, A.; Marques-Silva, J. Propositional proof systems based on maximum satisfiability. Artif. Intell. 2021, 300, 103552. [Google Scholar] [CrossRef]
  23. Kasihmuddin, M.S.M.; Mansor, M.A.; Sathasivam, S. Discrete Hopfield neural network in restricted maximum k-satisfiability logic programming. Sains Malays. 2018, 47, 1327–1335. [Google Scholar] [CrossRef]
  24. Sathasivam, S.; Mamat, M.; Kasihmuddin, M.S.M.; Mansor, M.A. Metaheuristics approach for maximum k satisfiability in restricted neural symbolic integration. Pertanika J. Sci. Technol. 2020, 28, 545–564. [Google Scholar]
  25. Tembine, H. Dynamic robust games in mimo systems. IEEE Trans. Syst. Man Cybern. Part B 2011, 41, 990–1002. [Google Scholar] [CrossRef]
  26. Aissi, H.; Bazgan, C.; Vanderpooten, D. Min–max and min–max regret versions of combinatorial optimization problems: A survey. Eur. J. Oper. Res. 2009, 197, 427–438. [Google Scholar] [CrossRef]
  27. Emami, H.; Derakhshan, F. Election algorithm: A new socio-politically inspired strategy. AI Commun. 2015, 28, 591–603. [Google Scholar] [CrossRef]
  28. Sathasivam, S.; Mansor, M.; Kasihmuddin, M.S.M.; Abubakar, H. Election Algorithm for Random k Satisfiability in the Hopfield Neural Network. Processes 2020, 8, 568. [Google Scholar] [CrossRef]
  29. Bazuhair, M.M.; Jamaludin, S.Z.M.; Zamri, N.E.; Kasihmuddin, M.S.M.; Mansor, M.A.; Alway, A.; Karim, S.A. Novel Hopfield neural network model with election algorithm for random 3 satisfiability. Processes 2021, 9, 1292. [Google Scholar] [CrossRef]
  30. Sathasivam, S. Upgrading logic programming in Hopfield network. Sains Malays. 2010, 39, 115–118. [Google Scholar]
  31. Zhi, H.; Liu, S. Face recognition based on genetic algorithm. J. Vis. Commun. Image Represent. 2019, 58, 495–502. [Google Scholar] [CrossRef]
  32. Mansor, M.A.; Sathasivam, S. Accelerating activation function for 3-satisfiability logic programming. Int. J. Intell. Syst. Appl. 2016, 8, 44–50. [Google Scholar] [CrossRef][Green Version]
  33. Sherrington, D.; Kirkpatrick, S. Solvable model of a spin-glass. Phys. Rev. Lett. 1975, 35, 1792. [Google Scholar] [CrossRef]
  34. Zhang, T.; Bai, H.; Sun, S. Intelligent Natural Gas and Hydrogen Pipeline Dispatching Using the Coupled Thermodynamics-Informed Neural Network and Compressor Boolean Neural Network. Processes 2022, 10, 428. [Google Scholar] [CrossRef]
  35. Jiang, W.; Luo, J. Graph neural network for traffic forecasting: A survey. Expert Syst. Appl. 2022, 207, 117921. [Google Scholar] [CrossRef]
  36. Yang, B.S.; Han, T.; Kim, Y.S. Integration of ART-Kohonen neural network and case-based reasoning for intelligent fault diagnosis. Expert Syst. Appl. 2004, 26, 387–395. [Google Scholar] [CrossRef]
  37. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
  38. Dehghani, M.; Trojovská, E.; Trojovský, P. A new human-based metaheuristic algorithm for solving optimization problems on the base of simulation of driving training process. Sci. Rep. 2022, 12, 9924. [Google Scholar] [CrossRef]
  39. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey Badger Algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110. [Google Scholar] [CrossRef]
  40. Zainuddin, Z.; Lai, K.H.; Ong, P. An enhanced harmony search based algorithm for feature selection: Applications in epileptic seizure detection and prediction. Comput. Electr. Eng. 2016, 53, 143–162. [Google Scholar] [CrossRef]
  41. Ahmadianfar, I.; Bozorg-Haddad, O.; Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inf. Sci. 2020, 540, 131–159. [Google Scholar] [CrossRef]
  42. Hu, L.; Sun, F.; Xu, H.; Liu, H.; Zhang, X. Mutation Hopfield neural network and its applications. Inf. Sci. 2011, 181, 92–105. [Google Scholar] [CrossRef]
  43. Wu, A.; Zhang, J.; Zeng, Z. Dynamic behaviors of a class of memristor-based Hopfield networks. Phys. Lett. A 2011, 375, 1661–1665. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
