S-Type Random k Satisfiability Logic in Discrete Hopfield Neural Network Using Probability Distribution: Performance Optimization and Analysis

Abstract: Recently, a variety of non-systematic satisfiability studies on Discrete Hopfield Neural Networks have been introduced to overcome a lack of interpretation. Although a flexible structure was established to assist in the generation of a wide range of spatial solutions that converge on global minima, the fundamental problem is that the existing logic completely ignores the probability dataset's distribution and features, as well as the literal status distribution. Thus, this study considers a new type of non-systematic logic termed S-type Random k Satisfiability, which employs a creative layer of a Discrete Hopfield Neural Network, and which plays a significant role in the identification of the prevailing attribute likelihood of a binomial distribution dataset. The goal of the probability logic phase is to establish the logical structure and assign negative literals based on two given statistical parameters. The performance of the proposed logic structure was investigated by comparing a proposed metric to current state-of-the-art logical rules; consequently, it was found that the models have a high value in two parameters that efficiently introduce a logical structure in the probability logic phase. Additionally, by implementing a Discrete Hopfield Neural Network, it was observed that the cost function experiences a reduction. A new form of synaptic weight assessment via statistical methods was applied to investigate the effect of the two proposed parameters in the logic structure. Overall, the investigation demonstrated that controlling the two proposed parameters has a good effect on synaptic weight management and the generation of global minima solutions.


Introduction
A Discrete Hopfield Neural Network (DHNN) is a significant type of Artificial Neural Network (ANN) that employs a learning model based on association features formulated by Hopfield and Tank [1]. ANNs have long been used as a mathematical method with which to solve a range of issues [2][3][4][5][6][7][8]. A DHNN is a recurrent ANN with feedback connections comprising interconnected neurons, in which every neuron's output is fed back into every neuron's input. Neurons are stored in either binary or bipolar form in the input and output neurons of the DHNN structure [9]. Further, the structures of DHNNs have been extensively modified to approximate optimal solutions to problems. This network exhibits many interesting behaviors. Fault tolerance is also a feature of its Content Addressable Memory (CAM) mechanism, which offers a large capacity for pattern storage and a usefully convergent iterative process [10]. Numerous applications have made use of DHNNs, including optimization problems [1], clinical diagnosis [11][12][13], the electric power sector [14], the investment sector [15], location detectors [16], and others. Despite the importance of using the intelligent decision systems of the DHNN to solve optimization problems, it is necessary to implement a symbolic rule to guarantee that the DHNN always converges to the ideal solution, because earlier studies failed to conduct a thorough analysis of a DHNN based on neural connections. This issue was solved by Wan Abdullah [17], who suggested a logical rule for ANNs by associating each neuron's connection with a true or plausible interpretation.
The Wan Abdullah approach is novel, and it is interesting to note that the synaptic weight is determined by matching the logic cost function with the Lyapunov energy function. This approach led to better performance than traditional learning techniques such as Hebbian learning with respect to obtaining the synaptic weight during the training phase. More specific logical rules have been developed since the logical rule was first introduced in the original DHNN. Sathasivam [18] expanded the work of Wan Abdullah and proposed Horn Satisfiability (HORNSAT) as a new Satisfiability (SAT) concept. That study introduced the Sathasivam method of relaxation to improve the finalized state of neurons, and it demonstrated the strong capability of HORNSAT to reach the absolute minimum energy. The outcome demonstrates that logical rules can be embedded in DHNNs. Nevertheless, because DHNNs relax too quickly and offer fewer possibilities for neurons to exchange information, more local minimum solutions result, which makes it difficult to understand how different logical rules affect DHNNs. This motivated the emergence of a new era of research with different perspectives, beginning with Kasihmuddin et al. [9], who introduced systematic k Satisfiability (kSAT) for k = 2, namely 2 Satisfiability (2SAT). With each clause containing two literals and all clauses joined by conjunction, the implementation of 2SAT in a DHNN was reported to achieve a high global minima ratio while keeping computational time to a minimum. Subsequently, Mansor et al.
[19] continued the research by proposing a higher order of kSAT for k = 3, namely, 3 Satisfiability (3SAT), in a DHNN. With each clause containing three literals and all clauses joined by conjunction, the proposed 3SAT in a DHNN increases the storage capacity of the network because each neuron's number of local minimum solutions tends to be low. Despite the success of the implementation of systematic logic in DHNNs, this approach lacks control with respect to distributing the number of negative literals as well as the variety of clauses. Furthermore, as the number of neurons increases, the efficiency of the training phase in the DHNN decreases, and during the testing phase there is less neuronal variation. Sathasivam et al. [20] clarified that the rigidity of the logical structure contributes to overfitting solutions in DHNNs. When the number of neurons is large, the restricted number of literals per clause results in suboptimal synaptic weight values, thereby decreasing the likelihood of locating diverse global minima solutions. Variance in the recovered solutions is necessary to ensure that the search space is well explored. As further stated in [21], DHNNs remain vulnerable to various challenges, including a lack of generality resulting from non-flexible logical rules and a strict logic structure, even though the accuracy obtained on real-world datasets has been satisfactory.
Due to the need for a different logical clause set that contributes to the degree of connection between the logical formulae, Sathasivam et al. [20] proposed a non-systematic SAT called Random k Satisfiability (RANkSAT), which uses first-order and second-order clauses together, where k = 1, 2, yielding Random 2 Satisfiability (RAN2SAT), with all clauses connected by conjunction. RAN2SAT introduces a flexible logic structure that contributes to the generation of more logical inconsistency, which expands the diversity of synaptic weights. The proposed RAN2SAT in a DHNN achieved about 90% of the global minima ratio with fewer neurons. Due to the necessity of increasing the storage capacity of RAN2SAT and dealing with the absence of interpretation in typical systematic satisfiability logic limited to k ≤ 2, Karim et al. [22] were inspired to resolve this problem and thus proposed a flexible logic structure that increases storage capacity by incorporating third-order clauses into the formulation. Random 3 Satisfiability (RAN3SAT) suggests three logical literal structures per clause (k = 1, 3; k = 2, 3; and k = 1, 2, 3), with all clauses joined by conjunction. This increases the capacity of the DHNN to recover neuronal states based on different logical orders, which can lead to a variety of convergent interpretations of global minimum solutions. Both RANkSAT types experience difficulty regarding the selection system in terms of the composition represented by the first-, second-, and third-order logical formulations, which is still poorly defined. Thus, the combination of correct interpretations is restricted to the number of k-order clauses with a predefined term assigned in the logical formula.
Another fascinating study on non-systematic logic with a different perspective was introduced by Alway et al. [23]; this solution increases the representation of 2SAT relative to 3SAT clauses in non-systematic SAT logic through an assigned 2SAT ratio (r*) in a DHNN in order to decrease the duplication of final neuron state patterns. The proposed Major 2 Satisfiability (MAJ2SAT) in the DHNN successfully provides more neuronal variation. Zamri et al. [24] introduced Weighted Random k Satisfiability (rSAT) as a non-systematic method whose logical structure is ideally produced by a Genetic Algorithm (GA), taking into account the desired proportion of negative literals (r). Another method, introduced by Sidik et al. [25], altered the rSAT logic phase by adding a binary Artificial Bee Colony algorithm to guarantee that negative literals are distributed properly. The proposed rSAT in a DHNN with a weighted ratio of negative literals leads to a significant global minima ratio. Nonetheless, despite this significant advancement in controlling the logical structure of selecting clauses and using a metaheuristic approach to distribute the number of negative literals, these techniques fail to account for the representation of the probability distribution of the dataset in the selection system.
Unique, flexible logical systems have been formed by combining systematic and non-systematic approaches from a unique perspective. This approach offers great potential for solution diversity, as it randomly generates a number of clauses. Guo et al. [26] proposed Y-Type Random 2 Satisfiability (YRAN2SAT), in which the numbers of first-order and second-order clauses are randomly assigned, and further final states can be retrieved by YRAN2SAT in a DHNN with the minimum global energy. With higher-order logic, Gao et al. [27] proposed G-Type Random k Satisfiability (GRAN3SAT), in which a set of clauses of first, second, and third orders is randomly generated. In a DHNN, GRAN3SAT can exhibit a larger storage capacity and is capable of investigating complex dimensional issues. Despite this success, the selection system still has a flaw: there is no clear mechanism with which to control the distribution of the desired number of negative literals based on the probability distribution of a dataset.
The Probabilistic Satisfiability problem (PSAT) involves assigning probabilities to a set of propositional formulae and deciding whether this assignment is consistent. The pioneering work was introduced by George Boole [28] from another perspective. He proposed the PSAT to determine whether one could discover a probability measure for truth assignments that satisfies all assessments. The PSAT framework was developed to represent such details as logical sentences with linked probabilities in order to infer the likelihood of a query sentence. The PSAT was initially suggested by George Boole and subsequently refined by Nilsson [29]. This intelligent perspective was followed by different studies [30][31][32][33], which all aimed to integrate probability tools into satisfiability without considering their implementation in a DHNN. The present study addresses this gap by introducing a probability distribution for the prevailing attribute in the dataset, which is represented in a DHNN through the desired logic.
There are no studies in this area regarding the way in which the probability distribution for literals with SAT may be represented in a DHNN. Thus, findings addressing this issue can be used to guarantee the most effective search for satisfying interpretations. Therefore, this study introduces S-type Random k Satisfiability (δkSAT), where k = 1, 2 (δ2SAT), together with the probability distribution of the prevailing attribute in the simulation dataset. It aims to address the problem in RANkSAT, where k randomizes the structure, by utilizing two statistical features, the probability distribution and the sample size formula, to obtain an estimator for the binomial distribution dataset. In addition to helping to assign the negative literals that are mapped to the prevailing attribute in a dataset, the proposal builds on the non-systematic logical rule RAN2SAT, whose main feature is structural flexibility; it also takes advantage of the logical rule 2SAT, whereas the non-systematic logical rule provides a more diversified solution [34,35]. Furthermore, the probability distribution is used to control the probability of compositions appearing in first- and second-order logic, avoiding the poorly explained structures or lack of interpretation in non-systematic SAT by providing suitable logical combinations depending on the dataset's distribution. Moreover, the logic system uses the binomial distribution's sample size to determine the appropriate number of negative literals based on the predetermined proportion appearing in the dataset. The clauses are then distributed in each order depending on the probability distribution governing appearance. This approach will help us determine the appropriate weight of the negative literal number in logic systems based on the distributed clauses in order to create suitable solutions [24]. Notably, researchers tend to neglect negative literals because they are indirectly mapped to errors in a logical structure [36]; however, in this study, negative literals represent the prevailing attribute in a binomial distribution that has only two characteristics.
Our proposed logical rule provides flexibility with respect to controlling the overall structure of δ2SAT in terms of the dataset's characteristics by combining the effects of statistical parameters with non-systematic features to identify suitable neuronal variation and diversity in the proposed logic. The framework of this paper is as follows: The motivation for this study is described in detail in Section 2. An overview of δ2SAT's structure is given in Section 3. The integration of δ2SAT into a DHNN is described in Section 4. Section 5 explains the experimental setup and the performance assessment metrics incorporated into the simulation. In Section 6, the effectiveness of the proposed logic in a DHNN is discussed and analyzed, with comparisons made to several existing logical structures with regard to various parameters and phases. The conclusions and future work are presented in Section 7 at the end of the article.

Issue with the Identified Probability Distribution
With reference to the structural issues in existing systematic and non-systematic satisfiability: in systematic logic, kSAT [19,37], the relevant approaches implement random selection of the literal states within clauses, where the clauses are selected uniformly, without regard to their individual probability or chance of appearing in the required population dataset. In non-systematic logic, the RANkSAT [20,22] structure is defined randomly, wherein the clauses are likewise selected uniformly. Moreover, the chance of obtaining negative and positive literals is uniformly distributed [38], with both outcomes having an equally likely chance of appearing. This implies that the population follows a uniform distribution, which is a limiting assumption. In this study, we address this research gap by giving the clauses, and the negative literals inside clauses, priorities drawn from a population dataset's probability distribution; when the dataset has two characteristics, i.e., negative and positive literals, we assign the negative literal to the prevailing attribute, which is drawn from a binomial distribution.

Initialization for the Number of Clauses and Number of Neurons
The investigation into controlling the general structure of SAT is still ongoing. Cai and Lei [39] proposed a Partial Maximum Satisfiability (PMAXSAT) clausal weighting mechanism, with a positive integer as its weight. This method demonstrated the power of weights in controlling the distribution of a logical structure based on the desired result. Conversely, Alway et al. [23] suggested a non-systematic logical rule, MAJ2SAT, which seeks to create bias in the selection of 2SAT over 3SAT via the r* ratio. The MAJ2SAT system successfully provides more neuronal variations that increase the composition of the 2SAT with the same number of neurons. Despite the benefit of extracting information from real datasets that exhibit the behaviors of 2SAT and 3SAT, the persistent issue is the selection system, which limits the value of r* to a set of limited pre-defined intervals chosen randomly without considering a dataset's probability distribution. Therefore, we propose the non-systematic logical rule δ2SAT, which incorporates a probability logic phase to calculate the probability of first- and second-order clauses appearing in the dataset by determining the required numbers of literals and clauses.

Initialization for the Number of Negative Literals
The structure of SAT should be subjected to a systematic analysis to avoid a poor description of a dataset. Dubois and Prade [40] examined the role of logic in dealing with uncertainty in an ANN. The work concluded that it was crucial to use a generalization method to determine how many negative literals should be distributed for technical convenience. Zamri et al. [24] introduced rSAT with the logic phase as a new phase to produce a non-systematic logical structure based on the ratio of negative literals. The ratio is generated in the logic phase by employing a GA to increase the logic phase's effectiveness. The findings showed that the proposed model performed well, indicating that a dynamic distribution of negative literals benefits the generation of global minimum solutions with different final neuron states. One limitation of the weighting scheme is the method of choosing the number of negative literals, where the value of r lies in a set of limited pre-defined intervals and is subject to random selection without considering the probability distribution of literals.
Alway's and Zamri's studies motivated the current study, in which we propose the non-systematic logical rule δ2SAT. It incorporates a probability logic phase that calculates the appearance-related probability distribution of the first-order and second-order clauses from the real dataset by predetermining the required number of neurons or clauses, harnessing the behavior of 2SAT so as to explore a wider solution space and extract information from datasets. It also assigns the number of negative literals required for the logic by using the sample size formula with a predefined, prevailing attribute proportion from the dataset that will be exposed in the logic.

Synaptic Weight Performance Using Statistical Analysis
Research on satisfiability in DHNNs suffers from a lack of statistical analysis, especially regarding the synaptic weight, which is considered the backbone of the global minimum solutions achieved during the testing phase. We determine the synaptic weight by contrasting the cost function with the Lyapunov energy. Previous studies on systematic and non-systematic approaches were limited in terms of assessing the performance accuracy of the logic in different phases, as mentioned in [9,21,22]. The synaptic weight was analyzed at several points in this study because the dimensions of the synaptic weight values described in [20,26] were not completely comprehensible. In addition, [27] measured the accuracy of the error in the synaptic weight by evaluating the differences between the synaptic weight obtained by Wan's method and the synaptic weight achieved in the training phase. This study addresses the gap by using new statistical tests to capture the impact of changing the synaptic weight during the training phase, given the absence of statistical tools in previous synaptic weight analyses.

S-Type Random 2 Satisfiability Logic
S-Type Random 2 Satisfiability (δ2SAT) is a new category of non-systematic-clause SAT in which the probability distribution is used to assign prevailing attributes in the dataset via two methods. First, depending on the dataset requirements, we assign the probability of the appearance of first- and second-order logic. Second, we use the sample size from a binomial population [41] to ascertain the appropriate number of negated literals inside each clause based on its assigned probability, since the probability of a negative literal appearing follows a binomial distribution. The novelty of these methods is that they determine the suitable weight of the negative literal number (ξ) in the logic depending on the distributed probability of clauses, which leads to greater structural diversity. In addition, the negative literal number is not fixed, and by increasing or decreasing the probability of obtaining a literal number in the logic system, there is greater flexibility with respect to the dataset.
Our approach can be introduced as a form of non-systematic logic comprising n literals per T clauses. It is a general form of RANkSAT logic, where k = 1, 2, expressed in k Conjunctive Normal Form (kCNF). The components of the S-Type Random 2 Satisfiability logic problem are as follows: (a) A set of h variables, τ 1, τ 2, τ 3, . . ., τ h, where τ i ∈ {−1, 1} for all items in our logic system; (b) A set of h non-redundant literals r i, where each literal appears in its positive (r i) or negative (¬r i) form; (c) A set of λ distinguishable clauses, T 1, T 2, T 3, . . ., T λ, where the clauses are joined by the logical AND operator (∧) and are distributed as follows: i. A set of x first-order clauses T 1 1, T 1 2, . . ., T 1 x, where each T 1 i consists of a single literal; ii. A set of y second-order clauses T 2 1, T 2 2, . . ., T 2 y, where each T 2 i consists of two literals joined by the logical OR operator (∨). The general formulation of S-Type Random 2 Satisfiability is given as follows:

Θ δ2SAT = (T 1 1 ∧ T 1 2 ∧ . . . ∧ T 1 x) ∧ (T 2 1 ∧ T 2 2 ∧ . . . ∧ T 2 y), (1)

where Θ δ2SAT in Equation (1) is δ2SAT for k = 1, 2. The difference between δ2SAT and RAN2SAT lies in the selection system for the number of clauses and the number of negative literals in δ2SAT. This system is established under the condition that the number of clauses corresponds to:

λ m = x m + y m, m = 1, 2, (2)

where λ m denotes the total number of literals (λ 1) or the total number of clauses (λ 2); y m and x m denote the number of literals in the second- and first-order clauses or the number of clauses when m = 1, 2, respectively; y m, x m ≥ 0 represent clauses T k i for different values of k; and p(x m) and p(y m) denote the probability of first- and second-order logic appearing, which is calculated by the Laplace formula [42] to find the probability of A y m from the population Ω, expressed as follows:

p(y m) = |A y m| / |Ω|, (3)

where A y m represents the number of elements that contain the prevailing attribute out of the total size of the dataset, |Ω|, in this study. We denote the probability of second-order logic, p(y m), by Y, which is considered the first parameter in δ2SAT.
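As a concrete illustration, the Laplace probability of the prevailing attribute and the resulting split of a clause (or literal) budget between first- and second-order logic can be sketched in Python. This is a hedged example: the function names and the nearest-integer rounding in clause_split are assumptions for illustration, not the paper's exact rule.

```python
def laplace_probability(prevailing_count, population_size):
    """Laplace (classical) probability: |A| / |Omega|."""
    return prevailing_count / population_size

def clause_split(lam, p_second):
    """Split a total budget `lam` (clauses or literals) between first- and
    second-order logic using the appearance probability Y = p(y_m).
    Nearest-integer rounding is an assumed convention."""
    y = round(lam * p_second)   # second-order share
    x = lam - y                 # remainder goes to first-order logic
    return x, y

# Example: 70 of 100 records carry the prevailing attribute.
Y = laplace_probability(70, 100)   # Y = 0.7
x, y = clause_split(10, Y)         # 3 first-order, 7 second-order clauses
```

For instance, with Y = 0.7 and a budget of λ 2 = 10 clauses, seven second-order and three first-order clauses are generated, mirroring the λ 2 case discussed in the text.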
The number of negated literals that exist in each T k i is determined by ξ, where ξ ∈ N is the negative literal number used to obtain ρ in the dataset [41] and is calculated by the standard sample-size formula for a binomial proportion:

ξ = Z^2 ρ 0 (1 − ρ 0) / d^2, (4)

where: ρ: the pre-defined negative literal proportion required in the logic system (the second parameter in the logic).
ρ 0: the negative literal proportion in the population (available before the survey; if no estimate of ρ 0 is available prior to the survey, a worst-case value of ρ 0 = 0.5 can be used to determine the sample size).
d: the margin of error (or the maximum error) of the negative literal proportion, obtained by rearranging Equation (4) as follows:

d = Z √(ρ 0 (1 − ρ 0) / ξ), (5)

Z: the upper α/2 point of the normal distribution when α = 0.01, where Significance Level = P(type I error) = α.
The distribution of the number of negated literals in each order's logic clause T k i depends on the value β k given in Equation (7), where β 1 and β 2 denote the negated-literal counts assigned to first- and second-order logic, respectively, and ∑ β k is the total number of negated literals existing in δ2SAT logic, where:

β 1 + β 2 = ξ. (8)

The structure of Θ δ2SAT is believed to provide more variation and greater diversity in the final neuron states, and to be able to find more global solutions in other solution spaces, via two effective parameters: Y and ρ. The implementation of S-type Random k Satisfiability logic in this study is outlined in Figure 1.
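The two statistical steps above can be sketched in Python. This is a minimal illustration under stated assumptions: ξ is computed with the standard Cochran-style sample-size formula for a binomial proportion, the ceiling rounding is an assumed convention, and splitting ξ across clause orders in proportion to Y is an assumption about Equation (7), not the authors' exact rule.

```python
from math import ceil
from statistics import NormalDist

def negative_literal_count(p0, d, alpha=0.01):
    """Sample-size formula for a binomial proportion:
    xi = Z_{alpha/2}^2 * p0 * (1 - p0) / d^2, rounded up to an integer."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # upper alpha/2 point of N(0, 1)
    return ceil(z * z * p0 * (1 - p0) / (d * d))

def distribute_negatives(xi, p_second):
    """Assumed split of xi negated literals between first- and second-order
    clauses in proportion to the appearance probability Y."""
    beta2 = round(xi * p_second)   # share for second-order clauses
    beta1 = xi - beta2             # remainder for first-order clauses
    return beta1, beta2            # beta1 + beta2 == xi, matching Equation (8)

# Worst-case proportion p0 = 0.5 with margin of error d = 0.25 and alpha = 0.01
xi = negative_literal_count(0.5, 0.25)        # xi = 27
beta1, beta2 = distribute_negatives(xi, 0.7)  # (8, 19)
```

The split always satisfies β 1 + β 2 = ξ by construction, which is the closure condition the probability logic phase checks before handing the structure to the DHNN.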

Probability Logic Phase in δ2SAT
The probability logic phase was developed to assess the features of a prevailing attribute in the dataset via its probability distribution, which is then reflected in the logic system by the two parameters Y and ρ; this differs from the logic phase in rSAT [24], where the phase is established to allocate the correct ratio of negative literals, and their positions in the rSAT logic, via metaheuristics. The main purpose of the probability logic phase is to extract the required information from the dataset and then generate the correct structure of RAN2SAT logic depending on the dataset features assigned by the two probability Equations (3) and (5). Once the desired logic has been attained, the probability logic phase is complete. This section introduces some logic generated from the dataset using the two parameters Y and ρ; the restriction in the probability logic phase is as follows:

p(x m) + p(y m) = 1, (9)

since first- and second-order logic are mutually exclusive outcomes, whose probability function can be defined as follows (Nilsson 1986) [29]: if r i and r j are mutually exclusive, then p(r i ∨ r j) = p(r i) + p(r j). According to the applied method for the determination of probability, there are two types of δ2SAT. In the first type, the probability logic phase determines the probability of appearance of first-order and second-order logic over the number of literals λ 1, together with the distribution of the desired number of negative literals in each clause, depending on the selected dataset. In the second type, the probability logic phase determines the probability of appearance of first-order and second-order logic over the number of clauses λ 2, together with the distribution of the desired number of negative literals in each clause, depending on the selected dataset. Table 1 introduces some possible examples of the two cases of δ2SAT logic generated from the dataset using Equations (4), (5) and (7) when ρ = 0.7. We observe that applying the same probability to more clauses λ 2 results in a reduced number of first-order
logic items than applying it to a greater number of neurons λ 1; notably, the number of unique logic combinations that the probability logic phase can create using a specific value of the two parameters Y and ρ is denoted by η. Algorithm 1 presents the pseudocode for the steps taken to generate Θ δ2SAT, which starts with the determination of the values of the two parameters Y and ρ; then, applying the constraint of the logic in Equation (9), the probability logic phase operates under the following conditions: (a) ρ ≥ 0.5, because we need to expose the prevailing attribute. (b) z is a random number generated to ensure that the negative values are distributed randomly in the logic phase. (c) The loop runs w times to ensure that the logic system is correctly generated. (d) The probability logic phase ends when Equation (8) is satisfied, at which point the DHNN training phase begins. The limitation that we observed in δ2SAT's logic structure concerns the positions of the negative literals; these are selected randomly depending on the z random numbers, and this randomization clearly affects the results, leading to inconsistent interpretations. In addition, there are no redundant literals. Also, due to the high probability of 2SAT, the Exhaustive Search (ES) algorithm is unable to find the best number of instances of first-order logic for a small number of clauses that satisfies Equation (9). The utilization of Θ δ2SAT in a DHNN is presented as DHNN − δ2SAT. In the next section, we clarify how Θ δ2SAT functions as a representational command to control the DHNN's neuron mappings.
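The loop described in conditions (a)-(d) can be sketched as follows. This is a minimal illustration in Python, not the authors' Algorithm 1; all names (probability_logic_phase, lam, xi, w) are hypothetical, and the clause encoding (tuples of ±1 signs) is an assumption chosen for brevity.

```python
import random

def probability_logic_phase(lam, Y, xi, w=100, seed=None):
    """Sketch of the probability logic phase: build a RAN2SAT-style structure
    whose clause mix follows Y and which contains exactly xi negated literals
    placed at random positions (the paper's z random numbers)."""
    rng = random.Random(seed)
    y = round(lam * Y)            # second-order clauses (assumed rounding)
    x = lam - y                   # first-order clauses
    n_literals = x + 2 * y
    for _ in range(w):            # retry loop: run at most w times
        signs = [1] * n_literals
        for pos in rng.sample(range(n_literals), xi):
            signs[pos] = -1       # place a negated literal via random z
        if signs.count(-1) == xi:  # Equation (8) satisfied: hand off to DHNN
            clauses, i = [], 0
            for _ in range(x):    # first-order clauses: one literal each
                clauses.append((signs[i],)); i += 1
            for _ in range(y):    # second-order clauses: two literals each
                clauses.append((signs[i], signs[i + 1])); i += 2
            return clauses
    raise RuntimeError("no valid structure generated in w attempts")

structure = probability_logic_phase(10, 0.7, 5, seed=0)
```

With λ 2 = 10 clauses, Y = 0.7, and ξ = 5, the sketch returns three first-order and seven second-order clauses carrying exactly five negated literals; only the literal positions vary with the random z values, which mirrors the positional limitation noted above.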

Θ δ2SAT in Discrete Hopfield Neural Network
A DHNN is a self-feedback network comprising N interconnected neurons with no hidden layers. The neurons are updated one at a time; Ref. [23] asserts that the possibility of neuronal oscillation is eliminated by asynchronous updating. This network offers parallel computing and quick convergence, and is also effective in terms of its CAM capacity, which has encouraged researchers to use DHNNs as mediums for solving challenging optimization problems. A general description of the state of activated neurons in a DHNN is provided below:

S i = 1 if ∑ j W ij S j ≥ ε, and S i = −1 otherwise, (10)

where the synaptic weight from unit i to unit j is W ij. The synaptic weight of a DHNN is always symmetrical, whereby W ij = W ji, and has no self-looping, W ii = W jj = 0. S j represents the state of neuron j; ε is a predetermined threshold value, and in this study, ε = 0 to guarantee a uniform decrease in DHNN energy [18]; and h is the number of logic variables. The δ2SAT is implemented in a DHNN (DHNN − δ2SAT) due to the requirement for a symbolic rule that can control the network's output and decrease logical inconsistency by minimizing the network's cost function. To derive the cost function E Θ δ2SAT of Θ δ2SAT, the following formula can be used:

E Θ δ2SAT = ∑ i ∏ j Ψ ij, summed over all x 2 + y 2 clauses, (12)

where x 2 and y 2 are the numbers of first- and second-order clauses. The inconsistency of Θ δ2SAT, denoted as Ψ ij, is specified in Equation (13), since both positive and negative literals are possible in Θ δ2SAT:

Ψ ij = (1/2)(1 − S r) if the literal is r, and Ψ ij = (1/2)(1 + S r) if the literal is ¬r, (13)

where r denotes the random literals assigned in Θ δ2SAT. If every clause term vanishes, then E Θ δ2SAT = 0; this indicates that all clauses in Θ δ2SAT are satisfied by the assignment found for the logic program during the training phase (i.e., a consistent interpretation is found). A consistent interpretation will help the logic program derive the correct synaptic weights of the Θ δ2SAT clauses, and the Wan Abdullah (WA) method [17] can be used to directly compare the cost function and the Lyapunov energy function of the DHNN to determine the values of W ij. However, it is noted that the DHNN's
synaptic weight can be effectively trained using a traditional approach such as Hebbian learning [1]; nevertheless, Ref. [43] demonstrated that the WA method, compared to Hebbian learning, achieves the optimal synaptic weight with minimal neuron oscillation. The synaptic weight is a building block (matrix) of the CAM. Therefore, a specific output-squashing mechanism is applied to every neuron in DHNN − δ2SAT via the Hyperbolic Tangent Activation Function (HTAF) to retrieve the correct logic pattern of the CAM; according to Karim et al. [22], it is expressed as follows:

tanh(h i) = (e^{h i} − e^{−h i}) / (e^{h i} + e^{−h i}). (14)

A DHNN's testing phase allows for the asynchronous updating of the neuronal state based on the following equation:

h i = ∑ j≠i W (2) ij S j + W (1) i, (15)

where h i represents the network's local field, W (2) ij is the second-order synaptic weight, and W (1) i is the first-order synaptic weight. By applying the HTAF to the h i values, the final state of the neurons is retrieved, and the neuron states S i(t) are updated by:

S i(t + 1) = 1 if tanh(h i) ≥ 0, and S i(t + 1) = −1 otherwise. (16)

The information that results in E Θ δ2SAT = 0 must be present in the neuron's final state [44], which corresponds to H Θ δ2SAT, the Lyapunov energy function [18]:

H Θ δ2SAT = −(1/2) ∑ i ∑ j≠i W (2) ij S i S j − ∑ i W (1) i S i. (17)

The convergence of the energy indicates when the network has reached a stable state [22]. This is supported by Sathasivam [18], who states that if a DHNN is stable and oscillation-free, the Lyapunov energy will reach its lowest value (the equilibrium state). Hence, a DHNN will always converge to the global minimum energy [45]. The convergence of the final neuron state can be assessed with the following equation:

|H Θ δ2SAT − H min Θ δ2SAT| ≤ Tol, (18)

where Tol is a predefined tolerance value and H min Θ δ2SAT is the anticipated global minimum energy produced by the final neuron state, calculated as follows:

H min Θ δ2SAT = −(x 2 / 2 + y 2 / 4), (19)

where x 2 and y 2 denote the numbers of first- and second-order clauses, respectively. Algorithm 2 gives the pseudocode of DHNN − δ2SAT, which explains the processes of the training phase and testing phase of DHNN − δ2SAT. Conventionally, the logic program employs a 2^n search
space to find consistent interpretations by ES in the training phase. Figure 2 illustrates the schematic diagram of DHNN−δ2SAT. The different orders k = 1, 2 are shown in two blocks. The orange block has two input/output (I/O) lines, green and yellow, representing the two types of logic distributed by clauses and neurons, respectively. Inside the orange box, the second-order clauses are depicted, and every line represents the connection of the neuron states via weights. On the right side, the dashed blue line denotes the first-order clause, which is also present in this phase, with two (I/O) lines, green and yellow; inside, each line represents the connection of the neuron states via weights. The satisfied clauses from the two boxes will result in E_Θδ2SAT = 0; the figure represents only the satisfied clauses of Θδ2SAT.
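The testing-phase dynamics described above (local field, HTAF squashing, asynchronous updates, and the Lyapunov energy) can be sketched as follows. This is a minimal illustration: the weight matrices below are placeholders, not the synaptic weights the WA method would derive for a particular Θδ2SAT instance.

```python
import math

def local_field(i, state, W2, W1):
    """h_i = sum_j W2[i][j] * S_j + W1[i]."""
    return sum(W2[i][j] * state[j] for j in range(len(state))) + W1[i]

def update_neuron(i, state, W2, W1):
    """Squash the local field with tanh (HTAF), then threshold to a bipolar state."""
    return 1 if math.tanh(local_field(i, state, W2, W1)) >= 0 else -1

def lyapunov_energy(state, W2, W1):
    """H = -(1/2) sum_ij W2[i][j] S_i S_j - sum_i W1[i] S_i."""
    n = len(state)
    pair = sum(W2[i][j] * state[i] * state[j] for i in range(n) for j in range(n))
    linear = sum(W1[i] * state[i] for i in range(n))
    return -0.5 * pair - linear

def run_until_stable(state, W2, W1, max_sweeps=100):
    """Asynchronous updates; stop when a full sweep changes no neuron (stable state)."""
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(state)):
            new = update_neuron(i, state, W2, W1)
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:
            break
    return state
```

With a symmetric, zero-diagonal weight matrix, repeated sweeps drive the state into a local (ideally global) minimum of the energy, in line with the convergence property cited from [18].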

Experimental Procedure for Testing DHNN -δ2SAT
In this section, we explain the proposed logic output and evaluate it using several evaluation metrics at all phases to guarantee the effectiveness of adding statistical parameters to RAN2SAT, which aims to produce Θδ2SAT logic. Furthermore, the simulation platform, the assignment of parameters, and the performance metrics are all explained. All models were run with the ES algorithm, which uses trial and error to achieve a minimized cost function (E_Θδ2SAT = 0) [23].

Simulation Platform
All simulations were carried out using the open-source software Visual Basic C++ (Version 2022) on a 64-bit Windows 10 operating system. To avoid biases in the interpretation of the results, the simulations were run on a single personal computer equipped with an Intel Core i5 processor. The open-source software RStudio was used to perform the statistical analysis. Eight different simulations, depending on the statistical parameters (probability and proportion), were conducted, including those involving different numbers of clauses and neurons. In addition, different numbers of logic combinations (η) were tested in this study. Each simulation's specifics are as follows:
(a) Various ranges of the parameter Y. This simulation assesses and examines the effects of the various probabilities that can be obtained from the dataset applied to δ2SAT. The performance metrics at each phase and the effect of parameter alterations on Θδ2SAT were determined.
(b) Various proportions of negative literals, ρ. Here, we evaluate the impact of different proportions of negative literals on Θδ2SAT, evaluating the performance metrics at each phase and determining the effects of parameter alterations on the proposed logic.
(c) A variety of logic structure analyses. Here, we compare Θδ2SAT with a number of well-known logical rules in terms of the diversity of satisfying clauses of the logical rule.
(d) Synaptic weight mean analysis for the Θδ2SAT models, including boxplots with whiskers and a probability function curve.

The Parameter Setting in Probability Logic Phase
The proposed model incorporates a probability logic phase. As previously mentioned, there are two types of Θδ2SAT, depending on whether the probability is applied to the number of neurons or the number of clauses. Numerous simulations are conducted to examine the impacts of different probabilities and several expected negative-literal proportions on the dataset, upon which the probability logic phase depends. The different probability logic phases will be denoted as δγ2SATρ, where γ = 1, 2 (1 refers to the probability with respect to the number of neurons, and 2 refers to the probability with respect to the number of clauses), and ρ refers to the negative-literal proportion; an overall model can thus be denoted as, e.g., δ¹2SAT₀.₉. A different type of logic is possible if the range of the probability parameter Y with respect to the number of neurons or clauses generates only one type of neuron or clause state: this yields a systematic 2SAT during initialization, which is not covered in this study; alternatively, the first-order clauses would outnumber the second-order clauses. When this occurs, the proposed system's structural benefit cannot be seen, because only one specific type of solution can be found in the final neuron state. To prevent these two degenerate cases, we propose Y > 0.5, so that more second-order than first-order features are implemented in the DHNN. In parallel, to determine the range of the proportion, we propose ρ > 0.5 to determine the correct number of negative literals that represent the prevailing attribute in the dataset; we also consider ρ₀ = 0.5, since no information is available prior to the survey. The symbols of the stages are presented in Table 2.
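The probability logic phase described above can be sketched as a small generator that splits a given number of neurons between second- and first-order clauses according to Y and negates a proportion ρ of the literals. The paper's exact assignment rule depends on the binomial dataset and Equations (3), (5) and (7), which are not reproduced here; this hypothetical sketch simply rounds the expected counts.

```python
import random

def generate_structure(total_neurons, Y, rho, seed=None):
    """Sketch of the probability logic phase: returns (second_order_clauses,
    first_order_literals) as bipolar signs, with Y > 0.5 and 0.5 < rho < 1
    as proposed in the study."""
    assert Y > 0.5 and 0.5 < rho < 1.0, "parameter ranges proposed in the study"
    rng = random.Random(seed)
    # Expected number of neurons devoted to second-order clauses (even, so they pair up).
    second_neurons = 2 * (round(Y * total_neurons) // 2)
    first_neurons = total_neurons - second_neurons
    # Negate a fixed proportion rho of all literals, then distribute them randomly.
    n_negative = round(rho * total_neurons)
    signs = [-1] * n_negative + [1] * (total_neurons - n_negative)
    rng.shuffle(signs)
    second = [(signs[i], signs[i + 1]) for i in range(0, second_neurons, 2)]
    first = [signs[second_neurons + i] for i in range(first_neurons)]
    return second, first
```

For example, with 10 neurons, Y = 0.6 and ρ = 0.9, the sketch yields three second-order clauses, four first-order clauses, and nine negative literals, regardless of the random placement.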

Parameter Setup of DHNN − δ γ 2SATρ
All simulations were run with 100 logical combinations (η = 100). This aids the analysis of the DHNN model and the approximate evaluation of the efficacy of the proposed logic in a DHNN under various distributions of the two parameters Y and ρ. The total number of literals in the logic system is represented by the number of neurons (λ₁) in the DHNN; we chose 5 < λ₁ < 50. For the DHNN, we apply a relaxation procedure in accordance with [18]. We select R = 3 in this context because a further reduction in the potential neuron oscillation has been observed, and a value of R greater than 4 yields the same outcome as in [27]. Table 3 summarizes all the parameters necessary for DHNN−δγ2SATρ. In addition, it is notable that each δγ2SATρ has a neuron combination equivalent to the other DHNN logic systems, which eliminates the issue of a small sample size.

Performance Metrics
The objective of each phase includes the evaluation of the performance of the proposed model. Therefore, this study utilizes several performance metrics to assess the efficacy of each simulation in the different phases of the DHNN−δγ2SATρ model, verifying the effectiveness of the proposed logic system in terms of the probability logic, learning, and testing analysis phases.

Assessment Logic Structure
The probability logic phase is the phase in which the correct logic sequence is generated; it controls the numbers of clauses and negative literals by solving Equations (3), (5) and (7). We attempt to evaluate the features of the output logic by comparing it with other models to guarantee well-produced logic in terms of clauses and negative literals, which will facilitate the attainment of the minimum cost function given in Equation (12). To determine the appropriate synaptic weight based on the main objective of this phase, we express three features: (a) the number of negative literals, affected by parameter ρ; (b) the weights of the second-order logic clauses, affected by parameter Y; and (c) the fully negative second-order logic clauses, affected by the two parameters Y and ρ. The goal is to compare these features to determine whether the probability logic phase succeeds in achieving the desired logic system by changing these parameters, and to demonstrate its excellence in expressing the logic features. The parameter ρ controls the proportion of negative literals; hence, in this section, we test the effectiveness of this parameter based on the several aspects provided below.
The proportion of negativity: in the probability logic phase, the optimal number of negative literals in the logic system is assigned as ξ, a constant ratio dependent on λ₁, and the probability of negative literals in the logic system is computed by the Probability Of total Negativity (PON). Equation (20) is derived from the Laplace formula [42]; we need to test whether the change in ρ affects the probability of a negative-literal structure occurring in the two types of logic, compared to other forms of logic that introduce random proportions of negative literals in the logic structure. When compared to other types of logic, this metric, if it corresponds to the required proportion, gives the correct negative-literal probability in the logic structure. To analyze the deviation of the negative literals over the whole logic system, we introduce a second measure, the Negativity Absolute Error (NAE), to determine the state of the negative literals in the whole logic system. The proposed NAE scale measures the amount of non-negativity error relative to the desired proportion in Equation (5). The optimal NAE is zero, which is equivalent to obtaining the required number of negative literals.
The probability of full negativity of second-order logic: fully negative second-order clauses (¬rᵢ ∨ ¬rⱼ) help to represent a greater number of the attributes in the final solution. The main objective of δ2SAT is to control the numbers of negative literals and second-order clauses in the logic structure. We need to expose the features of second-order logic, as mentioned previously, to fully enjoy the benefits of 2SAT in our proposed logic system. Therefore, the next measure is the Full-Negativity Absolute Error for second-order clauses (FNAE), where ξ₂SAT is the number of fully negative second-order clauses and λ₂SAT is the number of second-order clauses in a specific string of logic. The FNAE scale measures the accuracy of the logic in generating the fully negative second-order clauses, (¬rᵢ ∨ ¬rⱼ), as opposed to the remaining second-order clauses, (¬rᵢ ∨ rⱼ), (rᵢ ∨ ¬rⱼ), and (rᵢ ∨ rⱼ). Similarly, using this scale, we address the degree of effectiveness of the two parameters Y and ρ in altering the second-order clauses. The properties of this measure let us determine whether the required logic can represent the prevailing attributes. The optimal FNAE is zero, which is equivalent to obtaining the required number of fully negative second-order clauses.
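The three structure metrics above can be formalized as follows. These are plausible sketches consistent with the stated properties (PON is a Laplace-style probability; NAE and FNAE are zero at the optimum); they are not reproductions of the paper's exact Equations (20)–(22).

```python
def pon(n_negative, n_literals):
    """Probability Of total Negativity: share of negative literals,
    a Laplace ratio of favorable cases to all cases (assumed definition)."""
    return n_negative / n_literals

def nae(n_negative, n_literals, rho):
    """Negativity Absolute Error: relative deviation of the observed
    negative-literal count from the target rho * lambda_1 (assumed definition)."""
    target = rho * n_literals
    return abs(target - n_negative) / target

def fnae(n_full_negative_clauses, n_second_order_clauses):
    """Full-Negativity Absolute Error: share of second-order clauses that are
    NOT fully negative; 0 means every clause is (~r_i v ~r_j) (assumed definition)."""
    return 1.0 - n_full_negative_clauses / n_second_order_clauses
```

For instance, a logic string with 9 negative literals out of 10 under ρ = 0.9 yields PON = 0.9 and NAE = 0, and a string in which all second-order clauses are fully negative yields FNAE = 0.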
To address the effect of parameter Y on the second-order weight, we propose a weighted error measure, the Weighted Full-Negativity Absolute Error (WFNAE), which captures the accuracy of the effect of changing Y in both proposed logic types when compared to other logic systems. Here, λ̄₂SAT is the mean number of second-order clauses, and w(yₘ) is the weight of the second-order clauses, which equals Y, because the Laplace formula assigns an equally likely probability to all elements. Using this measure, we can determine the effect of Y on the deviation of the fully negative clauses from the mean; the real weight of this deviation is obtained by multiplying it by w(yₘ). A large value signifies a high degree of representation of the weight of the negative strings, which greatly improves our understanding of the weight of the dominating attribute in the logic. When comparing this scale across logic systems, the deviation is weighted toward the prioritized, fully negative clauses. Table 4 lists the symbols required during this phase: λ₁, the total number of neurons; ξ, the total number of negative literals in the logic system; p(yₘ), the probability of obtaining second-order clauses; ξ₂SAT, the number of fully negative second-order clauses; λ₂SAT, the number of second-order clauses; λ̄₂SAT, the mean number of second-order clauses; and R_tv, the ratio of cumulative neuronal variation.

Assessment during the Training Phase
In the training phase, we achieve satisfying assignments of the clauses, which generate the optimal synaptic weights of Θδγ2SATρ by minimizing Equation (12). The Root-Mean-Square Error (RMSE) has been used as a basic statistical metric for measuring the quality of a model's prediction in many fields [24]; here it identifies the quality of the training phase, where the training RMSE (RMSEtrain) is the root of the squared error between the neurons' desired fitness value F_desired and their current fitness F_i [22]. The RMSEtrain formula is RMSE_train = √((1/n) ∑ᵢ (F_desired − F_i)²). The optimal value of the RMSE in the DHNN model is zero, which means the WA method derived the correct synaptic weights; furthermore, a good model is achieved when the measure lies between 0 and 60. The Root-Mean-Square Error in synaptic weight (RMSEweight) is assessed analogously as RMSE_weight = √((1/n) ∑ᵢ (W_E − W_A)²), where W_E denotes the expected synaptic weight obtained by the WA method, and W_A is the actual synaptic weight obtained in the testing phase. This measure gives a complete picture of the error produced by the WA method; the best result is 0, which corresponds to Equation (12).
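Both training-phase measures are instances of the standard RMSE, applied to different pairs of quantities (desired vs. current clause fitness, expected vs. actual synaptic weight). A generic sketch, since the paper's per-iteration summation details are not shown here:

```python
import math

def rmse(expected, actual):
    """Root-Mean-Square Error between two equally long sequences."""
    assert len(expected) == len(actual)
    return math.sqrt(sum((e - a) ** 2 for e, a in zip(expected, actual)) / len(expected))

# RMSE_train: zero when every logical combination reaches the desired fitness,
# i.e., when the WA method has derived the correct synaptic weights.
f_desired = [10, 10, 10, 10]   # illustrative fitness values, not from the paper
f_current = [10, 10, 10, 10]
print(rmse(f_desired, f_current))  # 0.0
```

The same `rmse` function serves for RMSEweight by passing the expected (W_E) and actual (W_A) synaptic weights instead.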

Assessment for Testing Phase
In the event that the suggested network satisfies the requirement in Equation (18), the proposed DHNN−δ2SAT will act in conformance with the embedded logical rule during the testing phase. The final neuron state will enter a state of minimum energy, which corresponds to the cost function of the proposed DHNN−δ2SAT logical rule. Therefore, based on the synaptic weights generated in the training phase, we evaluate the quality of the retrieved final neuron states, namely, the global minima solutions, applying the following measure. Global minima ratio (R_G): the goal of the global minima ratio is to assess the retrieval efficiency of DHNN−δ2SAT. The formula for R_G is R_G = (1/(ϕη)) ∑ G_Θδ2SAT, where G_Θδ2SAT is the number of global minimum solutions that satisfy condition (18) after being distributed in Equation (19), ϕ is the number of trials in the training phase, and η is the number of logical combinations for each run. This metric has frequently been used in articles such as [21,38] to assess the convergence property of the proposed DHNN−δ2SAT.
The second measure in the testing phase is the Root-Mean-Square Error of energy (RMSEenergy) [22], which evaluates the minimization of energy achieved by DHNN−δ2SAT. The energy profile can be determined using RMSE_energy = √((1/n) ∑ (H_Θδ2SAT^min − H_Θδ2SAT)²). We use RMSEenergy to analyze the convergence of δ2SAT by determining the actual energy difference between the absolute minimum energy H_Θδ2SAT^min and the final minimum energy H_Θδ2SAT.
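The two testing-phase metrics can be sketched together. Here `final_energies` are the Lyapunov energies of the retrieved final states and `h_min` the anticipated global minimum; a solution counts as global when |H − H_min| ≤ Tol, following the convergence condition, with the tolerance value being an assumption of this sketch.

```python
import math

def global_minima_ratio(final_energies, h_min, trials, combinations, tol=0.001):
    """R_G = (number of global minimum solutions) / (trials * combinations)."""
    n_global = sum(1 for h in final_energies if abs(h - h_min) <= tol)
    return n_global / (trials * combinations)

def rmse_energy(final_energies, h_min):
    """RMSE between the anticipated global minimum energy and the final energies."""
    n = len(final_energies)
    return math.sqrt(sum((h - h_min) ** 2 for h in final_energies) / n)
```

For example, if three of four retrieved states reach h_min = −2.0 and one stops at −1.5, then R_G = 0.75 and RMSEenergy = 0.25.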

Similarity Index
The similarity index [38] and cumulative neuronal variation [24] can be used to evaluate SAT performance in a DHNN. The similarity index values are computed against benchmark neuron states S_i^max to determine the quality of each optimal final neuron state that achieved the global lowest energy, where 1 denotes a positive literal rᵢ and −1 denotes a negative literal ¬rᵢ in each clause. It should be noted that the benchmark neuron states are the DHNN model's ideal neuron states that satisfy the conditions in Equation (18). The retrieved final neuron states are compared to the benchmark neuron states indicated in Table 5 to provide a comprehensive comparison of the benchmark and final neuron states.
The overall comparison of the benchmark and final neuron states is conducted as follows [9]: according to Case 1 of Θδ2SAT, given in the examples in Table 1, the final neuron states generalize accordingly. In this study, we selected a well-known similarity index suitable for diverse perspectives, namely that developed by Sokal and Michener (Sokal) [46], which is employed to evaluate the viability of the recovered final neuron states. It should be noted that Sokal measures the similarity of S_i with S_i^max, including the negative cases, over the range (0, 1). The formulation is as follows: Sokal = (f + e) / (f + e + h + g). (30) The Ratio of Cumulative Neuronal Variation (R_tv) is used because the testing phase relies on the DHNN's ability to directly memorize the final neuron states without the need to create a new state. It is expressed in terms of E_i, the point scores used to assess the difference between the newly recovered final neuron states and the benchmark neuron states. The symbols required for the testing and training phases are shown in Table 4.
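The Sokal–Michener index in Equation (30) is a simple matching coefficient: matches between the benchmark and retrieved states are counted against mismatches. The mapping of the four letters e, f, g, h to the four (S_max, S) sign combinations is given in Table 5, which is not reproduced here, so the assignment below is an assumption of this sketch; the value of the coefficient is unaffected by which match (or mismatch) gets which letter.

```python
def sokal_michener(benchmark, retrieved):
    """Simple matching coefficient (f + e) / (f + e + h + g) between bipolar
    benchmark states S_max and retrieved final states S.
    Letter-to-case mapping below is assumed, not taken from Table 5."""
    assert len(benchmark) == len(retrieved)
    e = sum(1 for b, s in zip(benchmark, retrieved) if b == 1 and s == 1)
    f = sum(1 for b, s in zip(benchmark, retrieved) if b == -1 and s == -1)
    g = sum(1 for b, s in zip(benchmark, retrieved) if b == 1 and s == -1)
    h = sum(1 for b, s in zip(benchmark, retrieved) if b == -1 and s == 1)
    return (e + f) / (e + f + g + h)
```

An identical pair of states scores 1.0; a state agreeing on half the neurons scores 0.5.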

Comparison of Method and Baseline Models
Since this study focuses on investigating the performance of δγ2SATρ with respect to its logical behavior, we need to investigate δγ2SATρ's performance in terms of Y and ρ with regard to constructing a good logical structure in the probability logic phase. Therefore, we compare δγ2SATρ with the existing logic systems in DHNNs based on the logic structures, testing phases, and the quality of the solution, in order to examine two behaviors relating to logic: (a) the effects of controlling the number of clauses on the second-order weight and the non-systematic logic structure; (b) the capability of δ2SAT to control the negative literals and accurately reflect the behavior of the dataset.
In order to examine the logic in a DHNN after its implementation, we also compare the quality of its final neuron states to that of RAN2SAT, and we evaluate the variation introduced by the testing phase, the global minima solutions, and the variation of neurons. The most recent logic systems with a 2SAT structure were selected for this reason, the decision being to compare the logic systems' structures; in each, every clause contains two literals joined by a disjunction.
(a) 2SAT [37]: This is a systematic logical rule implemented in a DHNN, with each clause containing two literals. It is a special case of general Boolean satisfiability. Each clause in the 2SAT model can withstand no more than one suboptimal neuron update, making it akin to a two-dimensional decision-making system. When incorporated into logic mining, this logic system has demonstrated good applicability in task classification. The number of neurons ranged over 5 < λ₁ < 50.
(b) MAJ2SAT [23]: The initial focus of this effort was on developing the current non-systematic SAT logic structure. MAJ2SAT suggests structural modifications when considering unbalanced clauses; the unbalanced feature results from different compositions of 2SAT and 3SAT clauses, and MAJ2SAT therefore prefers a greater number of 2SAT clauses. Moreover, to avoid any biases, we limited the number of neurons to 5 < λ₁ < 50.
(c) RAN2SAT [20]: This system is a logical rule of second- and first-order clauses implemented in a DHNN as an initial form of non-systematic logic. δγ2SATρ has no structural differences compared to RAN2SAT but adds a logic probability phase. Due to the connection of the first-order clauses, RAN2SAT is reported to provide greater variety in synaptic weight. Although each literal state was chosen at random, the number of clauses in each order can be determined in advance. Specifically, the number of neurons ranged over 3 < λ₁ < 50.
(d) RAN3SAT [22]: This work expanded on the previous work by [20], incorporating higher-order 3SAT clauses in a non-systematic SAT structure, which improved the limited interpretability of the existing non-systematic SAT by storing more neurons per clause. Although the number of clauses of each order was selected at random, each literal state was defined. In this case, again, we restricted the number of neurons to the range 6 < λ₁ < 50.
(e) YRAN2SAT [26]: This system is known as the Y-Type Random 2-Satisfiability logical rule. YRAN2SAT's novelty lies in randomly generating first- and second-order clauses; it is a combination of systematic and non-systematic logic. By combining the features of both clause types, YRAN2SAT can explore the search space with a high potential for solution diversity. YRAN2SAT introduces remarkable logical flexibility: the number of clauses is predefined by the user, while the literal states are defined at random.
(f) rSAT [24]: This is a new non-systematic satisfiability logic class, known as Weighted Random k Satisfiability for k = 1, 2, which includes a weighted ratio of negative literals and adds a new logic phase to produce a non-systematic logical structure based on the specified number of negative literals. More diverse final neuron states were obtained by integrating rSAT into a DHNN. The proposed model showed outstanding promise as an advanced logic-mining model that can be used further in the forecasting and prediction of real-world problems. In this study, we select r = 0.5 because it has been found to perform well in the logic phase of rSAT [24]. The range of the number of neurons was 5 < λ₁ < 50.

Benchmark Dataset
In this study, the proposed model generated bipolar interpretations randomly from a simulated dataset. More specifically, the logical representation used in the simulations serves as the foundation for the structure of the simulated data. Simulated datasets are commonly used in modeling and evaluating the efficacy of SAT logic programming, as demonstrated in the work of [18,22,27].

Statistical Test
This section provides a brief definition of the statistical measures used in this study for two purposes (description and testing).
(a) The measure of central tendency, defined as "the statistical measure that designates a single value as being indicative of a whole distribution" [47]. We selected two measures:
(i) The average, known as the arithmetic mean (or simply the "mean"), is calculated by adding all of the values in the dataset and dividing by the number of observations. It is one of the most significant measures of central tendency. The mean has the disadvantage of being sensitive to extreme values/outliers, especially when the sample size is small; as a result, it is ineffective as a measure of central tendency for a skewed distribution [48]. Its formula is X̄ = (1/n*) ∑ᵢ xᵢ, where X̄ denotes the mean, xᵢ represents the data values, and n* denotes the sample size.
(ii) The median is the value that occupies the central position when all observations are arranged in ascending/descending order. It divides the frequency distribution into two halves, is not biased by outliers, and is determined by the following formula [49]: X̃ = x₍(n*+1)/2₎ if n* is odd, and X̃ = (x₍n*/2₎ + x₍n*/2+1₎)/2 if n* is even, (33) where X̃ denotes the median and n* the sample size.
(b) The measure of dispersion: variability measures inform us about the distribution of the data and allow us to compare the dispersion of two or more sets of data. We can determine whether the data are stretched or compressed using dispersion metrics, namely the Standard Deviation (SD), which evaluates variability by considering the distance between each score and the distribution's mean as a reference point. It is the square root of the variance and gives an indication of the average separation from the mean.
(c) The boxplot and whiskers (measure of position): the boxplot (Tukey, 1977) [50] is a well-known tool for displaying significant distributional features of a dataset. The classical boxplot displays the quartiles Q₁, Q₂, Q₃ and whiskers, where the median equals Q₂; Q₁ and Q₃ estimate the 25th and 75th percentiles, providing an estimate of the interquartile range IQR = Q₃ − Q₁. The whiskers end at those values just inside the "fences", defined by the lower fence LF = Q₁ − 1.5 × IQR and the upper fence UF = Q₃ + 1.5 × IQR. Observations beyond the fences (the outliers) [51] are plotted individually and are defined as the data points outside these boundaries. The boxplot is particularly helpful when comparing different datasets: instead of using a table of values, we can quickly compare all the reported summary statistics (location, spread, and range of the data in the sample or batch) across numerous datasets.
(d) The Laplace Principle of Probability states that, in a space of elementary events Ω in which each element has the same chance of appearing, the probability of a compound event A is equal to the ratio of the outcomes favorable to A to the number of all possible outcomes, as demonstrated by the formula in Equation (4).
(e) The probability density function curve is a schematic illustration of the density of a random variable, where f(x) denotes the probability density function; its shape visualizes the distribution of a continuous random variable and gives the probability that the variable's value falls within a specific interval, P(a ≤ X ≤ b) = ∫ₐᵇ f(x) dx.
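The descriptive measures above (mean, median, SD, and Tukey's fences for the boxplot) can be computed directly. A small sketch using the Python standard library; `statistics.quantiles` with `method="inclusive"` is one common quartile convention, assumed here since the paper does not state which one its boxplots use.

```python
import statistics

def tukey_fences(data):
    """Summary statistics plus Tukey's fences LF = Q1 - 1.5*IQR, UF = Q3 + 1.5*IQR;
    points outside the fences are flagged as outliers."""
    q1, q2, q3 = statistics.quantiles(data, n=4, method="inclusive")
    iqr = q3 - q1
    lf, uf = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [x for x in data if x < lf or x > uf]
    return {"mean": statistics.mean(data), "median": q2,
            "sd": statistics.stdev(data), "LF": lf, "UF": uf,
            "outliers": outliers}
```

This also illustrates the mean's sensitivity to outliers noted above: for the data 1–9 plus a single value of 100, the mean is 14.5 while the median stays at 5.5, and the boxplot flags 100 as an outlier.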
(f) The Wilcoxon signed-rank test: the Wilcoxon signed-rank test was first introduced by Frank Wilcoxon in 1945 [52]. It is a nonparametric test for the one-sample location problem, used to test the null hypothesis that the median of a distribution equals some value (H₀: X̃ = 0) for data that are skewed or otherwise do not follow a normal distribution. It can be used instead of a one-sample t-test or paired t-test, or for ordered categorical data. If p-value ≤ α, the null hypothesis is rejected; this is strong evidence that the null hypothesis is invalid, i.e., the deviation of the median is significant. The Wilcoxon statistic W for independent random variables xᵢ is computed over the π pairs whose difference is not 0, with W taken as the smaller of the absolute rank sums of the positive and negative differences. The symbols of these statistics are listed in Table 6. The details of the implementation of Θδ2SAT into the DHNN are presented in Figure 3, which contains the probability logic, learning, and testing phases, and the evaluation metric for each phase.
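Following the textual description above, the Wilcoxon statistic W can be sketched as: drop zero differences, rank the absolute differences (averaging ranks for ties), and take the smaller of the positive- and negative-rank sums. The p-value computation (exact tables or normal approximation) is omitted from this sketch.

```python
def wilcoxon_w(differences):
    """W = min(W+, W-) for the Wilcoxon signed-rank test."""
    d = [x for x in differences if x != 0]           # keep the pi nonzero pairs
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):                            # average ranks over ties
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, x in zip(ranks, d) if x > 0)
    w_minus = sum(r for r, x in zip(ranks, d) if x < 0)
    return min(w_plus, w_minus)
```

For differences (1, −2, 3, −4, 5), the positive ranks sum to 9 and the negative ranks to 6, so W = 6; zero differences are discarded before ranking.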

Results and Discussion
In this section, we describe the suggested logical output and evaluate it using a variety of evaluation metrics throughout all three phases to ensure that the addition of statistical tools to the RAN2SAT structure and the produced δγ2SATρ logic is effective. Furthermore, the simulation platform, the assigned parameters, and the metrics' performance are discussed in this section. It is important to note that we have not considered any optimization during the probability logic phase, as in Zamri et al.'s [24] work; the training phase, as proposed in [21,38]; or the testing phase, as proposed in [9,53].

Logic Structure Capability
The probability phases give us different models in terms of negative literals and second-order logic with respect to the two parameters Y and ρ. Since both parameters fall within the [0,1] interval, we could generate an endless (infinite) number of 2SAT models using both parameters. For the majority of the representations of 2SAT, we chose to use Y (p(yₘ)) more frequently than p(xₘ), so that the results would lie in the range 0.6–0.9. In this study, we likewise chose values of ρ greater than 0.5, in the range 0.6–0.9, for the probability logic phases to obtain a greater representation of the negative literals in order to study the predominating attributes in the dataset, as previously mentioned.
We selected the most significant differences from the two intervals and designated them as models, illustrated in Table 7, in order to examine the efficacy of the two parameters with different numbers of λₘ, where 5 < λ₁ < 50, so as to improve on the benefits of other recently developed logic systems. Subsequently, we test the two δγ2SATρ types with different values of λₘ, Y, and ρ; these values are selected considering the significant changes in probability and negative literals. Notably, ρ = 1 is disregarded, because we do not need all literals to be negative: such a structure would not represent the binomial distribution dataset, and the DHNN−δ2SAT would give one satisfied interpretation of a first-order clause [54]; on the other hand, Y = 1 would give purely second-order logic. It is important to emphasize that we do not consider a systematic δ2SAT logical system in this study. Table 7 shows the names of the two δγ2SATρ types for the different possible models depending on the two parameters Y and ρ, as well as the other logic symbols. The negativity representation: the PON measure for the different logic models has been tested via Equation (20). The PON represents the probability of the appearance of a negative literal in the entire logic system over all combinations with different λ₁. It is necessary to control the negative literals in order to determine the prevailing attributes in the dataset, as negative literals ensure more negativity in the final neurons; we can then ensure that the attribute appears in the solution space, helping the DHNN find the optimal solution [24].
Figure 4, a line representation, shows different layers of logic at different proportions for both types of δγ2SATρ. For the other groups, ρ = 0.5 for rSAT logic, and ρ is random for the other logic systems (YRAN2SAT, MAJ2SAT, RAN3SAT, 2SAT, and RAN2SAT). These lie at the minimum levels relative to the proposed δγ2SATρ because, as already noted, the probability of receiving a negative literal in those systems is extremely low. The two highest layers were recorded at ρ = 0.9 and ρ = 0.8 for both types of δγ2SATρ, respectively. Applying Equation (5) yields the best number of negative literals for all λ₁, similar to the third layer for the other two groups, where ρ = 0.6 and ρ = 0.7 were the lowest probabilities in both types of δγ2SATρ. The change in the proportional parameter ρ thus indicates success in producing the desired number of negative literals in the logic system, representing the predominant attributes in our dataset. Additionally, there was a direct correlation between the number of neurons in each class of the desired proportion and the proportions, where a high PON recorded a low probability when the number corresponded to λ₁. When λ₁ is less than 17 or greater than 31, the PON becomes approximately stable. This is because the d in the sample-size equation, Equation (6), always selects the optimal sample reflecting the number of negative literals, even when the number of neurons is low. Table 8 provides detailed information on the PON in each proportion group for the two types of logic. Note that the group ρ = 0.9 recorded the maximum PON and the highest mean PON with a low σ in both types of δγ2SAT₀.₉; the small σ indicates that, across different numbers of neurons λ₁, the PON means remain close, and this result is highly similar within each group for all models and increases as Y increases in the models of both types,
namely, δ¹2SATρ and δ²2SATρ. We can also note that the PON means of the other logic systems are close to one another; the minimal PON value was recorded by YRAN2SAT, with a minimum mean of 0.4966 and a low SD (σ = 0.015), indicating that it was also the lowest across different numbers of neurons, with values less than or equal to 0.5. The PON results prove the flexibility of δγ2SATρ's structure in controlling the literals' states. The accuracy of the models is evaluated by the NAE measure in Equation (21) in terms of the amount of non-negativity error, i.e., the deficit in negative-literal status over the entire logic system, in each proportion group for both types of δγ2SATρ models. According to the line representation in Figure 5, the effect of the proportional changes in the logic structure guarantees the required improvement over RAN2SAT, effectively reflecting the prevailing attribute in the dataset, where different proportions give different layers. The details of Figure 5 can be found in Table 9, which shows that the minimum NAE values were recorded in the group ρ = 0.9, where A4 in δ¹2SAT₀.₉ recorded the lowest error (0.1429). It should be observed that its median value (0.3090) was also the lowest, indicating that A4, for all numbers of neurons λ₁, always had a smaller error in the middle sections. Additionally, it should be noted that all models in the same group, A16, A12, and A8, have very similar median values (0.333, 0.31, and 0.320); this is because, as shown by the PON, this group has the highest
probability for the representation of a negative literal, which is accomplished by the proportion ρ = 0.9.Similarly, in δ 2 2SAT 0.9 , Q4 recorded the lowest error as 0.1429, but the least median was recorded by Q16 (0.13125), which means the minimum error lies in the middle values with respect to the number of neurons λ 1 .Moreover, it can be noted from Figure 5 that for a small number of neurons λ 1 , Q4 has fewer NAE values than Q8, Q12, and Q16.However, the reverse is true for the middle values of Q16 compared to Q12, Q8, and Q4, as mentioned before regarding the effect of Y in λ 1 .However, in Table 9 the value of the median has very small differences from the model in group ρ = 0.9.As discussed in terms of the PON, this indicates the successfulness of the proportion of representation in the logic system.The highest NAE value was observed to be for rSAT with a high median, where r = 0.5 with the nearest value of NAE for the other logic systems (YRAN2SAT, MAJ2SAT, RAN3SAT, 2SAT, and RAN2SAT); as previously mentioned, there was a lack of representation of the negative literals in the logic system, as they recorded the least degree of the probability of the appearance of negative literals.The probability of full negativity of second-order logic: We examined the ability of several models incorporating the two types of δ γ 2SATρ to produce full-negativity second-order clauses with greater accuracy compared to other recently developed logic systems by manipulating two parameters, Y and ρ, using the FNAE measure for the second-order clause in Equation (22).Obtaining full negativity second-order logic guarantees that the prevailing attribute in the desired logic structure is represented.Figure 6, a columnar representation, shows the result of the FNAE measure, the higher accuracy achieved by A8 and A4 in δ 1 2SATρ, and Q4 in δ 2 2SATρ that obtained a value of (0) for FNAE.This is due to the effect of the two parameters in this model, for which the proportion of 
negative number is ρ = 0.9, with a lower probability than other models in second-order logic where Y = 0.6, 0.7, which means that all second-order clauses are satisfied by negative numbers because of the small representation of second-order clauses.Based on the same figure, the low accuracy obtained by A1 and Q1, which obtain the maximum number in terms of the FNAE logic (0.8930, 0.8650), is the reason for the low representation of the negative proportion in the logic system.Thus, if we need greater representation of the prevailing attributes in the desired logic structure, we should choose the A8 and A16 from δ 1 2SAT 0.9 and Q4 from δ 2 2SAT 0.9 .Model A4 recorded higher accuracy using the lowest value of the FNAE median (0.3995), which means the minimum error lies is in the middle values for all neuron quantities λ 1 .We also note the proportion of negative literals is ρ = 0.9, which means there are more second-order negative clauses in the models in δ 1 2SATρ recorded in model Q12, where the lowest FNAE median was (0.4147).The accurate results regarding the FNAE measure are listed in Table 10.It is evident that the ratios of the negative literals are ρ = 0.9 and Y = 0.9, indicating that the model has a higher fraction of negative, second-order representations.Comparing these results to those of other state-of-the-art logic systems, all of them provide low accuracy due to a higher median value, which indicates that the mean lacks the ability to accurately represent the full-negative second-order values in this model.RAN2SAT performs the best among the logic systems.The latest logic systems give higher errors because the fluctuation in predetermine for assigning second-order logic and low represent for negative literal that indicate the δ 1 2SATρ and δ 2 2SATρ is flexible more than the recent logic systems in controlling of two parameters.A high result in the WFNAE measure in Equation (23) indicates that full-negative second-order logic is more greatly 
represented.By using this scale, the weight of the sentences in the logic has been evaluated, and the Y parameter may be used to determine whether the model is desirable because the highest probability gives the highest weight.The maximum probability, as shown in Figure 7, is the highest weight represented and is obtained by A16, Q16 in δ 1 2SAT 0.9 , and δ 2 2SAT 0.9 , respectively, and 0 for YRANSAT, because it also produces first-order logic.In Table 11, note the highest significant median value was achieved by the A16 and Q16 models (0.4477 and 0.4691, respectively), and the lowest significant median value was achieved by the YRANSAT (0) WFNAE value.This would ensure that the prevailing attribute has the highest representation in our logic compared to other state-of-the-art logic systems, in addition to its ability to minimize and maximize changes in Y.In conclusion, it is evident that the two parameters, Y and ρ, have a direct impact on the probability distribution dataset in the δ γ 2SATρ logic structure.
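The probability logic phase above assigns negative literals according to the proportion parameter ρ, so that the PON concentrates around ρ under a binomial distribution. The following is a minimal illustrative sketch (the Bernoulli sampling and function names are my assumptions, not the paper's exact Equations (5) and (6)):

```python
import random

def assign_negations(num_literals, rho, seed=0):
    # Draw each literal's negation flag from a Bernoulli(rho) trial, so the
    # count of negative literals follows a binomial distribution and the
    # proportion of negatives (PON) concentrates around rho.
    rng = random.Random(seed)
    return [rng.random() < rho for _ in range(num_literals)]

def proportion_of_negatives(flags):
    # PON: fraction of literals that carry a negation.
    return sum(flags) / len(flags)
```

For ρ = 0.9 and a moderate number of literals, the computed PON hovers near 0.9, mirroring how the ρ = 0.9 group records the maximum PON in Table 8.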

Training Phase Capability
This phase's objective is to evaluate the efficiency of the various δγ2SATρ structures produced in the probability logic phase, which were trained in a DHNN by minimizing the logical inconsistencies via Equation (12) to obtain the correct synaptic weights. In this phase, ES obtained consistent interpretations for Θδγ2SATρ and derived the correct synaptic weights for the logic system. If the model arrived at an inconsistent interpretation (E Θδ2SAT ≠ 0), the DHNN−δ2SAT model resets the whole search space and generates a new one until φ = υ. The error with respect to the maximum fitness of the logic, represented by the total clauses versus the achieved fitness, is quantified in the training phase by RMSEtrain and RMSEweight via Equations (24) and (25), respectively. Figures 8 and 9 show the RMSEtrain and RMSEweight results for both types of δγ2SATρ when υ = 100; for both types, RMSEtrain undergoes an exponential (logistic) increase with a growth rate equal to |Fi − Fdesired|, while RMSEweight increases linearly. According to [26], the error value in the training phase starts off low when the learning set is small, because it is more difficult to fit a larger learning set. Hence, as λ1 rises, more iterations are required for the DHNN to locate SAT structures with satisfying interpretations, and the training phase metrics obtain a value of 0 when λ1 is small. When the value of Y is high, the error is always low, because the structure of second-order logic helps ES reach satisfaction (Fi = Fdesired) to a greater extent than first-order logic, and because the probability of finding a consistent interpretation for each δγ2SATρ clause follows a binomial distribution, which measures the effect of the flexible structure under changes in the two parameters Y and ρ in terms of the RMSEtrain and RMSEweight results [24]. As shown in Figures 8 and 9, a high probability of second-order clauses (high Y) makes it easier to locate optimal interpretations [22], which means the WA method can derive the correct synaptic weights. Conversely, when Y decreases, the probability of the first-order clauses being satisfied is very low compared to 2SAT; due to its limited number of interpretations, the non-systematic logical rule with first-order clauses reduces the cost function of the logic. In the line representation of Figure 8, for δ1 2SATρ a large RMSEtrain is reported for A4 (118.895), which belongs to group Y = 0.6 and has the smallest number of 2SAT clauses; at the same time, the RMSEtrain medians give a more significant result for group Y = 0.7, where A8 (68.5274) has a large RMSEtrain value unaffected by outliers for all λ1. Thus, when Y decreases, the ES cannot find a consistent interpretation for first-order logic. The lowest RMSEtrain median belongs to group Y = 0.9, at A14 (38.16665), which indicates that a large number of 2SAT clauses makes it simpler for ES to achieve a consistent interpretation. For δ2 2SATρ, a large error was reported for the Y = 0.6 group at Q1 (114.342) because of its small number of 2SAT clauses.
For the median results, Q3 (64.7599) reported a high RMSEtrain in the same group, while group Y = 0.9 reported a lower value at Q16 (41.0488), indicating the same behavior as δ1 2SATρ; it is worth noting that large Y and ρ entail large fitness errors. It is clear for Q(4, 8, 12, 16) that when ρ = 0.9 in both measures, it is difficult for ES to satisfy the negative literals, because the extreme number of negative literals makes it difficult to achieve optimal fitness, as mentioned in [24]. Due to the limited room for searching, it is also challenging to apply ES to large Y with small λ1. Finally, the ES mechanism in the training phase of the DHNN is only effective when λ1 is small, and it is hampered by a high number of neurons because of its non-randomized operator [24]. The training phase can be improved further by embedding a learning algorithm with global and local search operators in the DHNN [26]; this approach may aid the search for optimal Θδγ2SATρ interpretations and ensures that logical inconsistencies are minimized. In the column representation of Figure 9, the RMSEweight results for the two types of δγ2SATρ models help to better understand the fitness of the neuron states. A value of 0 was obtained for various quantities of λ1 in the interval [5, 18] across models of both types; the values then start to fluctuate at large λ1, and the maximum RMSEweight values were reported for A7 and Q3, where the number of negative literals is large (ρ = 0.9) and λ1 is large. Table 13, which corresponds to Figure 9, reports that the maximum RMSEweight medians are A1 (0.0791) and Q10 (0.0548), where ρ is small, whereas small values are reported for A16 (0.0075) and Q14 (0.0048), where the number of negative literals is large. Clearly, RMSEweight is affected by the clause fitness measured by RMSEtrain: when the ES cannot find an interpretation for a clause at a high value of λ1, the DHNN cannot derive the correct synaptic weights by the WA method, and the result exceeds zero. The fluctuation in the results arises because the DHNN selects random weight values if E Θδ2SAT ≠ 0 after the number of iterations φ reaches its maximum. In conclusion, it is evident that the two parameters, Y and ρ, have a direct impact on the probability distribution dataset during the training phase.
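The WA derivation referenced throughout this phase can be sketched for a single clause. The sketch below (function names and conventions are mine) expands the bipolar inconsistency cost of a clause and matches it term by term against the Hopfield energy; the resulting magnitudes agree with the ±0.5 first-order and ±0.25 second-order synaptic weights used later in Equation (38):

```python
def wa_weights_second_order(neg_i, neg_j):
    # Wan Abdullah comparison for one 2SAT clause (l_i v l_j) over bipolar
    # neurons: the inconsistency cost E = (1/4)(1 - s_i*S_i)(1 - s_j*S_j),
    # with s = -1 for a negated literal, is expanded and matched against the
    # Hopfield energy H = -W_ij*S_i*S_j - W_i*S_i - W_j*S_j.
    s_i = -1.0 if neg_i else 1.0
    s_j = -1.0 if neg_j else 1.0
    return (s_i / 4, s_j / 4, -s_i * s_j / 4)  # (W_i, W_j, W_ij)

def wa_weight_first_order(neg_i):
    # For a first-order clause (l_i): E = (1/2)(1 - s_i*S_i) gives W_i = s_i/2.
    return -0.5 if neg_i else 0.5
```

For the positive clause (ri ∨ rj) this gives Wi = Wj = 0.25 and Wij = −0.25; negating a literal flips the corresponding signs.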

Testing Phase Capability
The optimal testing phase is achieved when E Θδ2SAT = 0 and the optimal synaptic weights are retrieved, after DHNN−δ2SAT completes checking clause satisfaction and generates the optimal synaptic weights through the WA method. The final state of the neurons will then converge towards the global minimum energy. It is important to evaluate the testing phase because a DHNN frequently produces similar final neuron states rather than novel ones [55]. Therefore, we compare the δγ2SATρ logic with the recent logic systems using the global minima ratio metric. If a model is unable to reach a global solution, it is trapped in a local solution, which makes it impossible to determine whether the proposed DHNN−δ2SAT is satisfied or not.
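The convergence described here can be sketched with the standard bipolar Hopfield dynamics: a local-field update in the spirit of Equation (15) and a Lyapunov energy that does not increase under asynchronous updates. The variable names, the tie-breaking choice sign(0) = +1, and the single-clause toy weights are my assumptions:

```python
import numpy as np

def retrieve_final_state(W2, W1, state, sweeps=10):
    # Asynchronous Hopfield update: h_i = sum_j W2[i, j]*S_j + W1[i],
    # then S_i <- 1 if h_i >= 0 else -1 (a local-field rule).
    S = np.array(state, dtype=float)
    for _ in range(sweeps):
        for i in range(len(S)):
            h = W2[i] @ S + W1[i]
            S[i] = 1.0 if h >= 0 else -1.0
    return S

def lyapunov_energy(W2, W1, S):
    # H = -(1/2) S^T W2 S - W1 . S; non-increasing under asynchronous
    # updates, so the network settles in a (possibly local) minimum.
    return -0.5 * S @ W2 @ S - W1 @ S

# Toy weights of the single clause (r_i v r_j): W_ij = -0.25, biases 0.25.
W2 = np.array([[0.0, -0.25], [-0.25, 0.0]])
W1 = np.array([0.25, 0.25])
S_final = retrieve_final_state(W2, W1, [-1.0, -1.0])
```

Starting from the unsatisfying state (−1, −1) for the clause (ri ∨ rj), the network settles in a satisfying state with the minimum energy −0.25.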
Figure 10, a column representation, shows the global minima ratio results calculated by Equation (26) for the two types of δγ2SATρ and the state-of-the-art logic systems, without considering any optimizer, in order to assess the actual testing phase capability of DHNN−δ2SAT. The optimal value of the global minima ratio RG is 1. Figure 10 shows that all models are capable of retrieving the optimal synaptic weight values at small λ1; the ratio then decreases at large λ1, because the ES is unable to manage the synaptic weights in the training phase, becomes susceptible to retrieving non-optimal neuron states, and is ensnared in local minima. A model's ability to achieve the maximum global minima ratio demonstrates that the suggested SAT is effectively integrated into the DHNN. The maximum global minima ratio was reported for YRAN2SAT, rSAT, and the (A1, A11, Q11) models of δγ2SATρ. YRAN2SAT recorded a high global minima ratio for small λ1 [26] because the flexibility of its structure offers accurate results. Table 14 gives the numerical results of Figure 10; from the RG medians, which are unaffected by outliers, both types of δγ2SATρ achieve results close to the other latest logic systems. A high median goes to MAJ2SAT because of its (2SAT, 3SAT) logic structure [23]; the fair representation of literal states in rSAT [24] likewise lets it achieve a high RG. Based on the RG medians in Table 14, the two parameters Y and ρ have a strong effect in δγ2SATρ: with small Y and ρ, the DHNN can retrieve the right synaptic weights at small λ1, as in (A1, Q1), but by median, high Y and ρ achieve more global minima, as in A(13, 14, 15) and Q(9, 10, 13, 14). It can be said that the proposed models showcase the efficiency of δγ2SATρ in controlling the DHNN as a symbolic structure that causes network convergence. Since the local field in Equation (15) drives the neuron's final state in accordance with the behavior of the second- and first-order clauses, it exhibits the same behavior as the non-systematic RAN2SAT structure presented by [20].

The purpose of computing RMSEenergy in Equation (27) is to calculate the difference between the final energy and the absolute minimum energy, as stated in the condition of Equation (18). Since it indicates whether or not the solutions produced by DHNN−δ2SAT are optimal, the flexibility of δγ2SATρ must be assessed by determining the value of RMSEenergy. The column representation in Figure 11 shows that small λ1 achieves lower RMSEenergy values for all models, indicating successful convergence towards the optimal final neuron state, after which the final energy difference fluctuates as λ1 increases. This phenomenon occurs as a result of the decreased probability of obtaining the cost function E Θδ2SAT = 0, as is clear from RMSEtrain, which leads to higher energy, and of DHNN−δ2SAT's ineffective learning strategy. As λ1 increases, some synaptic weights become suboptimal, resulting in final neuron states stuck at a local minimum energy. Additionally, Sathasivam [18] claims that during the DHNN testing phase, suboptimal neuron updates are what cause local minimum energies to exist; suboptimal neuron updates in this situation result in more unsatisfied clauses, which raises the energy gap. When the logical formulation containing 2SAT was incorporated into DHNN−δ2SAT, the δγ2SATρ behaved like the traditional non-systematic logical rule RAN2SAT. Figure 11 also shows the adverse impact of negative literals at high numbers of λ1, where A4 and Q12 recorded the highest RMSEenergy values, while A1 and Q1 showed the opposite at small λ1. Table 15 gives the medians of RMSEenergy from Figure 11, which provide the accurate result: the small medians go to A13 and Q9, with a low value of the parameter ρ, while A8 and A16, with a high value of ρ, give a high RMSEenergy error. This demonstrates that when most neuron states are negative, the network tends to converge towards a local minimum energy. In conclusion, it is evident that the two parameters, Y and ρ, have a direct impact on the probability distribution dataset during the testing phase.
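The two testing metrics used above can be sketched directly; the tolerance and the exact aggregation are my assumptions rather than the paper's Equations (26) and (27):

```python
import math

def global_minima_ratio(final_energies, h_min, tol=1e-3):
    # R_G: fraction of retrieval runs whose final energy lies within a
    # tolerance of the absolute minimum energy (the optimal value is 1).
    hits = sum(1 for h in final_energies if abs(h - h_min) <= tol)
    return hits / len(final_energies)

def rmse_energy(final_energies, h_min):
    # Root-mean-square gap between the final energy of each run and the
    # absolute minimum energy; 0 means every run converged globally.
    return math.sqrt(sum((h - h_min) ** 2 for h in final_energies) / len(final_energies))
```

A run trapped in a local minimum contributes 0 to the numerator of RG and a positive gap to RMSEenergy, which is why the two measures move together in Figures 10 and 11.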

Similarity Index Analysis
For the quality of the final neuron states, we compare both types of δγ2SATρ only with RAN2SAT, because δγ2SATρ is considered an enhancement and development of RAN2SAT, and they share the same structural behavior. We tested the variation introduced by the testing phase for the δγ2SATρ models and the quality of their final neuron states compared with RAN2SAT, where the degree of state redundancy of the DHNN model's training phases is indicated by the similarity index of the final neuron states. A standard indexing metric has been introduced, namely the Sokal index, together with an effective metric known as the ratio of total neuron variation, Rtv.
Firstly, consider that a lower Sokal value in Equation (30) among the similarity index metrics indicates that the final neuron states obtained are highly distinct from the benchmark states. According to the column representation in Figure 12, both types of δγ2SATρ reported low values, implying a greater variety of solutions; the lowest were recorded by A16 and Q16, while Q1 and A5 recorded high values, owing to the parameter ρ. Table 16 translates Figure 12 numerically, where A16 and Q16 reported low median values. All logic with ρ = 0.9 and Y = 0.9 records low values, indicating that more negative neurons and less first-order logic make the final neuron states distinct from the benchmark states, as shown by the blue numbers in Table 16 for Q, A (4, 8, 12, 16). In other words, low negativity and greater representation of first-order logic give a high Sokal value, as shown by the red numbers for Q, A (1, 5, 9, 13). Secondly, consider the effective parameter known as the ratio of total neuron variation, Rtv, in Equation (31). The column representation in Figure 13 clearly shows, for both types, a different number of variation solutions for different numbers of λ1, because of the effect of the two parameters Y and ρ in the training phase. For the δ1 2SATρ models, high oscillation was recorded in 14 < λ1 < 20, with the highest oscillation value recorded by A16 in 17 < λ1 < 20. For the δ2 2SATρ models, high oscillation was recorded in 14 < λ1 < 26, with the highest oscillation value traced to Q15 in 13 < λ1 < 23. At the same time, both types of δγ2SATρ models are affected by the number of neurons; they start their ups and downs at different λ1 according to the effect of the two parameters Y and ρ. The total oscillation of some models reaches zero when λ1 < 5 or λ1 > 39, such as A (1, 3, 4, 5, 8, 10, 12), and is very low for the other δ1 2SATρ models; similarly for Q1 and Q4 when λ1 < 5 or λ1 > 35 in δ2 2SATρ. It can be said that there are no significant variations above roughly 37 neurons. We can also note here the effect of Y: a model cannot achieve the global solution for low Y, because the ES will disturb the δγ2SATρ model while trying to reach the optimal training phase (learning over inconsistent interpretations); from Figure 10, the global solutions acquired by the δγ2SATρ models grow as λ1 decreases, as introduced previously. Table 17 gives the numerical results for Figure 13; note the effect of increasing ρ, where the logic with ρ > 0.7 recorded the highest numbers of Rtv, with the highest variation going to A16 (0.2149) and Q15 (0.2084). It can also be seen that δ2 2SATρ records a higher Rtv than δ1 2SATρ in general; the reason is that δ2 2SATρ yields a smaller number of first-order clauses than δ1 2SATρ for the same Y, as mentioned previously in Table 1, so the ES deals with fewer first-order clauses, for which it is difficult to reach the optimal training phase. Moreover, Figure 13 shows the reason for the decrease as λ1 increases: the global solution becomes harder to achieve. It was observed that RAN2SAT behaves similarly to δγ2SATρ, with a high recorded Rtv of 0.1764, increasing in the interval 13 < λ1 < 42 and then decreasing at high λ1. The impact on Rtv of the global minimum solutions is related to the number of neurons: as λ1 rises, the probability of obtaining global solutions reduces. We can conclude from the above results that Rtv is related to the occurrence of other neuron states that lead to global minimum solutions in other domain adaptations [22].
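The similarity analysis can be illustrated as follows. The sketch uses one common Sokal-Sneath coefficient over the four bipolar match counts, and reads Rtv as the fraction of distinct final states among the retrieved solutions; both forms are my illustrative assumptions, not necessarily the exact Equations (30) and (31):

```python
def match_counts(final_state, benchmark_state):
    # a: both +1; b: final +1, benchmark -1; c: final -1, benchmark +1; d: both -1.
    a = b = c = d = 0
    for f, g in zip(final_state, benchmark_state):
        if f == 1 and g == 1:
            a += 1
        elif f == 1 and g == -1:
            b += 1
        elif f == -1 and g == 1:
            c += 1
        else:
            d += 1
    return a, b, c, d

def sokal_sneath(final_state, benchmark_state):
    # One common Sokal-Sneath form: a / (a + 2*(b + c)); lower values mean
    # the final neuron states are more distinct from the benchmark states.
    a, b, c, _ = match_counts(final_state, benchmark_state)
    return a / (a + 2 * (b + c))

def ratio_total_variation(final_states):
    # Rtv read here as the fraction of distinct final neuron states among
    # all retrieved solutions (an illustrative reading only).
    return len({tuple(s) for s in final_states}) / len(final_states)
```

A lower Sokal value and a higher Rtv both point in the same direction reported above: less redundancy among the final neuron states.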

Synaptic Weight Analysis
The mean is important because it signifies the location of the dataset's centre value and contains information from each observation in a dataset; when a dataset is skewed or contains outliers, however, the mean may be misleading. We are utilizing various statistical tests to help us comprehend the behavior of the synaptic weights and to deduce information about the performance of the logic in the training phase, for further inquiry into the synaptic weight distribution. The descriptive statistic of the mean synaptic weight is a novel perspective in synaptic weight analysis, and we consider the mean of the full logic to obtain a meaningful result in this analysis, using the following formula, where Wri = ±0.5 is the synaptic weight for first-order logic, Wrj = ±0.25 is the synaptic weight for second-order logic literals, and Wrjrj+1 = ±0.25 is the synaptic weight for second-order logic clauses. An example of the formula is shown as follows:

Mean of δ2SAT = (−0.5 + 0.5 + (−0.25 − 0.25 − 0.25) + (−0.25 + 0.25 + 0.25))/6 = −0.0833 (38)

The centre value of a dataset carries a piece of information from every observation; accordingly, the mean gives information on the centre value of all synaptic weights in the logic, which jointly affect the cost function in the training phase. In this study, the mean of 100 combinations was calculated in the training phase as the sampling size for each logic in both types of δγ2SATρ, so we have 100 individual mean results sharing the same two parameters Y and ρ. It is worth noting that all mean values were tested first using appropriate tests that yielded significant p-values to ensure a correct outcome. The features of these values are statistically represented by the curve of the probability density function f(x), by representative points, and by a boxplot with whiskers, jointly denoted a Raincloud Plot, and we aim to achieve the following with these figures: (a) The probability density function curve f(x) gives an accurate picture of the data behavior (symmetry or skewness), so that we can determine whether there is an outlier or whether all values are distributed normally in the δ1 2SATρ and δ2 2SATρ logic (a normal bell curve indicates that there is no outlier and that the logic has a high probability of achieving satisfaction in terms of Y and ρ). (b) The representative points show the spread of the mean values, while the boxplot and whiskers explain the amount of spread around the median, along with the details of outliers from the median value given by the whisker sides.
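Summing the signed weights in the worked example gives (−0.5 + 0.5) + (−0.75) + (0.25) = −0.5, and dividing by 6 as in Equation (38) yields approximately −0.0833. A minimal sketch of this grand mean (the helper name is mine; the divisor follows the paper's worked example):

```python
def mean_synaptic_weight(weights, n):
    # Grand mean of the signed synaptic weights of one logic structure;
    # following the worked example of Equation (38), the weights are summed
    # and divided by n (6 in the paper's example).
    return sum(weights) / n

# Weights of the Equation (38) example: first-order weights +/-0.5,
# second-order literal and clause weights +/-0.25.
example = [-0.5, 0.5, -0.25, -0.25, -0.25, -0.25, 0.25, 0.25]
```

Calling `mean_synaptic_weight(example, 6)` reproduces the value of roughly −0.0833 from the example.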
This investigation examines the impact of mean value analysis in evaluating the DHNN−δ2SAT during the training phase. We consider the highest λ1 in each logic system's combination to calculate the mean, so we have λ1 between 48 and 50 to obtain more accurate results. In the training phase, the synaptic mean value was determined under the ES effect of uncovering inconsistent interpretations, which offers a basic understanding of the behavior of the logic and its achievement of satisfaction. (a) When Y = 0.6, we note the following from Figure 14: The δ1 2SATρ probability function curve shows thin tails on the two sides, so it is fairly symmetric in shape, indicating that outliers are infrequent (an observation is considered an outlier if it differs numerically from the rest of the data), and the mean values tend to be normal for A (1, 2, 3, 4). The probability function curves of Q3 and Q4 are similar in behavior to δ1 2SATρ, being fairly symmetric in shape with thin tails on the two sides and rare outliers, but Q1 and Q2 show different results: they tend to be non-symmetric, with a heavier tail on the left, which means there are many outliers. This result is supported by the boxplot and whiskers. When we look at the interquartile ranges, IQR (the lengths of the boxes), the longer the box, the more dispersed the data, and the shorter the box, the less dispersed. It can be observed that δ1 2SATρ is highly dispersed from the median compared to δ2 2SATρ, since the IQR range is higher in A (1, 2, 3, 4) than in Q (1, 2, 3, 4). In addition, in terms of outliers, which in a box plot are defined as data points lying outside the whiskers, δ1 2SATρ and δ2 2SATρ show approximately the same behavior of large outliers, but δ1 2SATρ has more outliers than δ2 2SATρ, because the ES could not achieve consistent interpretations in the training phase, and the model structure then leads to random values for the synaptic weights. Finally, the boxplot clearly shows that the distribution is non-symmetric for both δ1 2SATρ and δ2 2SATρ (the distribution is symmetric when the median is in the centre of the box and the whiskers are nearly the same on both sides of the box). The reasons for these results are as follows. In terms of the Y parameter, the number of first-order clauses, which have p(xm) = 0.4 in this logic, pulls the logic curve to the sides, because the suboptimal synaptic weights for first-order logic appear clearly in the distribution tail and the box-whisker plot; moreover, δ2 2SATρ has more 2SAT clauses than δ1 2SATρ for the same Y, which is reflected in the spread of values in the boxplot, which is higher in δ1 2SATρ. This indicates a high variation between the mean values where the ES failed to find a consistent interpretation. In terms of the ρ parameter, the boxplot also shows that ρ gives more negative synaptic weights, but we should also consider the value of W_BB, which is positive in the clauses (¬ri ∨ rj), (ri ∨ ¬rj), and (ri ∨ rj) and affects the mean values of the 2SAT clauses. It is notable that in δ2 2SATρ there is almost no effect of ρ, since, as mentioned previously, it has more 2SAT clauses than δ1 2SATρ for the same Y; therefore, the ES tends to obtain consistent interpretations, which is reflected in the mean values of the whole logic's synaptic weights. Conversely, for δ1 2SATρ the effect of ρ is clearer in the mean values, with most of the points located on the negative side.
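The dispersion and outlier rules applied in this analysis follow the usual boxplot conventions: the IQR is Q3 − Q1, and a point beyond the whiskers, i.e., outside [Q1 − 1.5·IQR, Q3 + 1.5·IQR], is flagged as an outlier. The sketch below computes quartiles by the median-of-halves convention, one of several in use (my assumption):

```python
def median(xs):
    # Middle value of a sorted copy; average of the two middle values if even.
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

def iqr_outliers(data):
    # Q1/Q3 as medians of the lower/upper halves (excluding the middle
    # element for odd n); outliers lie outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR],
    # i.e., beyond the boxplot's whiskers.
    xs = sorted(data)
    n = len(xs)
    lower, upper = xs[: n // 2], xs[(n + 1) // 2 :]
    q1, q3 = median(lower), median(upper)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return iqr, [x for x in xs if x < lo or x > hi]
```

A longer box (larger IQR) corresponds to more dispersed mean values, exactly the comparison made between the A and Q groups above.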
(b) When Y = 0.7, we note the following from Figure 15: The probability function curve for δ1 2SATρ exhibits the same behavior as for Y = 0.6, indicating that it is symmetric in shape with normal mean values; it has thin tails on the two sides, so outliers are infrequent. For δ2 2SATρ the picture is a little different: all of Q (5, 6, 7, 8) are symmetric, and the mean values tend to be normal with light tails, except for Q6, whose curve is fat-tailed, so there are many outliers on both sides. The boxplot and whiskers tell the same story as for Y = 0.6. Looking at the boxes, δ1 2SATρ is highly dispersed from the median compared with δ2 2SATρ, because the value of the IQR is higher in A (4, 5, 6, 7) than in Q (4, 5, 6, 7). Moreover, in terms of outliers, δ1 2SATρ and δ2 2SATρ both show approximately the same behavior of large outliers, but δ1 2SATρ has more outliers than δ2 2SATρ, except for Q6. Most of the logic has outliers and, at the same time, a short box (which implies that high-frequency data tend to be more fat-tailed). Finally, from the boxplot, the non-symmetric shape of both δ1 2SATρ and δ2 2SATρ can be seen clearly. The reasons for these results are justified as follows. In terms of the Y parameter, the number of second-order clauses, with p(xm) = 0.3, is considered somewhat high, especially at high λ1, which generates E Θδ2SAT ≠ 0 and pulls the logic curve to the two sides, because the suboptimal synaptic weights appear clearly in the tails of the probability curve and the box-whisker plot. δ2 2SATρ has more 2SAT clauses than δ1 2SATρ for the same Y parameter, which is reflected in a spread of values in the boxplot higher than that of δ1 2SATρ; it therefore shows a high variation between mean values, because the ES failed to find consistent interpretations. In terms of the ρ parameter, the boxplots of δ1 2SATρ and δ2 2SATρ reflect negative synaptic weight values; as with Y = 0.6, the spread of the data is affected by ρ in the 2SAT clauses, and it affects the value of the mean, which tends to be positive, as mentioned previously. Finally, as seen for Q6, the reason for the right fat tail is the high number of second-order clauses that generate suboptimal synaptic weights, resulting in positive mean values.

(c) When Y = 0.8, we observe the following from Figure 16, where the 2SAT clauses are the common clauses: For δ1 2SATρ, the curve shows a semi-normal shape in A (9, 11, 12) and is semi-skewed in A10, with light tails on the two sides and fewer outliers across all of δ1 2SATρ. On the other side, δ2 2SATρ gives a similar result: Q (10, 12) are fairly symmetric in shape, with normal mean values and thin tails on the two sides, while Q (9, 11) tend to be non-symmetric; light tails with fewer outliers appear across all of δ2 2SATρ. From the boxplot and whiskers, δ1 2SATρ is more dispersed about the median by comparison, because the IQR range is higher in A (9, 10, 11, 12) than in Q (9, 11, 12) and is shortest in Q10. In terms of outliers beyond the box plot whiskers, δ1 2SATρ and δ2 2SATρ show approximately the same behavior of large outliers on both sides, but Q11 has more outliers on the left than the others, and Q9 more on the right. Finally, the boxplot clarifies that both logic systems have non-symmetric curves. The reasons for these results are justified as follows. In terms of the Y parameter, it was the first-order clauses, with their small appearance probability, that made the range of the mean values high in the two previous Figures 14 and 15; it is clear in these figures that for δ1 2SATρ and δ2 2SATρ the chance of obtaining a (±0.5) synaptic weight is small, so the range of most mean values is small, leading to a less spread curve line. On the other side, the high representation of 2SAT clauses makes the box lengths the greatest, because the volatility in the mean values of the 2SAT clauses gives different results depending on the negative literals, where (¬ri ∨ rj), (ri ∨ ¬rj), and (ri ∨ rj) have mean values different from (¬ri ∨ ¬rj); there is also an effect from the ES search and from the cost function in Equation (12), which pulls the logic curve and box-whisker plot to the sides, and this is reflected in a spread of values in the boxplot higher than for Y = 0.6, 0.7. In terms of the ρ parameter, its effect is high here: in the boxplots of δ1 2SATρ and δ2 2SATρ, it is clear in the range of values, most of which fall on the negative side, most clearly in Q, A (11, 12), because the mean values of the fully negative second-order clauses are highest here, as we clarified with the FNAE metric. It is also noted that Q (9, 10) and A10 lie on the positive side, because ρ is small; therefore, the mean is positive, and the ES search tends towards consistent interpretations. This indicates the effect of the parameter ρ, but A9 still has first-order logic, which makes the data spread to the two sides with a light tail; in Q10 and Q12, however, the tails arise from the extreme mean values that come from the fully negative clauses and the first-order clauses.

(d) When Y = 0.9, we observe the following: The δ1 2SATρ probability function curve indicates a reasonably symmetric shape in A (15, 16), while A14 tends to be non-symmetric, with thin tails on the two sides, implying that outliers are infrequent; A13, however, is skewed and heavy-tailed, which implies many outliers on the left. In δ2 2SATρ, Q (13, 14, 16) are symmetric, while Q15 tends to be non-symmetric; they have thin tails on the two sides, implying that outliers are infrequent, although Q14 is heavily tailed, which indicates many outliers, whereas Q13 and Q16 have light tails with infrequent outliers. Looking at the interquartile ranges, we can observe that in δ1 2SATρ, A (15, 16) are considerably dispersed from the median compared to A (13, 14), the IQR range being similarly high in δ2 2SATρ; meanwhile, Q (13, 15, 16) are highly dispersed from the median compared to Q14, because their IQR range is the highest. Reviewing the box whiskers, δ1 2SATρ and δ2 2SATρ show approximately the same behavior of large outliers; however, Q, A (13, 14) have more outliers than Q, A (15, 16). Finally, the boxplot clearly shows the non-symmetry of both δ1 2SATρ and δ2 2SATρ, as previously mentioned. The reasons for these results are justified as follows. In terms of the Y parameter, the first-order clauses have the smallest appearance here, so the mean values are high, as is clear in the δ1 2SATρ and δ2 2SATρ figures; moreover, the majority representation of 2SAT clauses makes the spread across the whole box length the highest in δ1 2SATρ and δ2 2SATρ, because of the volatility in the means of the 2SAT clauses mentioned previously. This pulls the logic curve to the two sides, as well as the box-whisker plot, and is reflected in a dispersion of values in the boxplot greater than for Y = 0.6, 0.7. In terms of the ρ parameter, it also has a high effect: in the boxplots of δ1 2SATρ and δ2 2SATρ, it is clear in the range of values, most of which fall on the negative side, most clearly in Q, A (15, 16), because the means of the fully negative second-order clauses are highest here, as we explained with the FNAE metric. The other logic, A, Q (13, 14), still has more first-order logic, which causes the means to spread in two directions, with heavy tails in Q14 and A13 due to the extreme values that arise from the fully negative clauses and the second-order clauses.
From this result, we can note the significance of the synaptic weight analysis; it gives a summary of the search space area for a specific algorithm in training phases, and it is clarified by the mean synaptic weight results, which give the center of search space (optimal) and the wide by the range of spread (suboptimal) from the previous result the mean synaptic weight gives a general perspective for the mechanism of ES algorithm in this search space.Thus, we can observe the behavior of working in this limited space, as well as the behaviors of obtaining a solution using optimal and suboptimal synaptic weights.The ES has a unique search space that is heavily influenced by the number of neurons and the structure of logic.
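The descriptive statistics used above (the mean as the center of the search space, the IQR and whiskers for the spread and outliers) can be sketched as follows. This is a minimal illustration only; the function name and the weight sample are our own, not taken from the paper's implementation.

```python
import statistics

def synaptic_weight_summary(weights):
    """Descriptive statistics of a sample of synaptic weights: the mean
    (center of the search space), the IQR (spread), and the count of
    boxplot outliers beyond the 1.5 * IQR whiskers."""
    w = sorted(weights)
    mean = statistics.fmean(w)
    q1, q2, q3 = statistics.quantiles(w, n=4)  # quartiles (exclusive method)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [x for x in w if x < lo or x > hi]
    return {"mean": mean, "median": q2, "IQR": iqr, "outliers": len(outliers)}

# Illustrative synthetic sample of synaptic weight values:
sample = [-2.0, -1.5, -1.0, -1.0, -0.5, -0.5, 0.0, 0.5, 0.5, 6.0]
print(synaptic_weight_summary(sample))
```

A predominantly negative sample with one extreme positive value yields a negative median and a single whisker outlier, mirroring the boxplot behavior described in the text.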

The Limitation of the DHNN-δ2SAT
One limitation of DHNN − δ2SAT in this study is that the proposed hybrid network only considers propositional logic programming. The DHNN is unable to embed other variants of logic, such as predicate logic, fuzzy logic, or probabilistic logic, owing to the nature of the Hopfield Neural Network proposed by Pinkas [56], which is limited to symmetric connectionist networks, as well as the DHNN's low storage capacity and the cost function proposed by Wan Abdullah (1992), which only considers bipolar neurons. In addition, this study limits the number of neurons to fewer than 52 because of the ES. Consequently, in future improvements, we will replace the ES with metaheuristics such as the Artificial Bee Colony Algorithm [57] and the Election Algorithm [58]. Despite the DHNN's flexibility, the quality of the solutions offered by δ2SAT needs to be improved. We can increase the number of iterations in our simulations by increasing the number of learning cycles; with more iterations, the proposed model may yield more neuron variation, fewer errors, and a global minimum solution.
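The neuron cap stems from the exponential cost of exhaustive search: the ES must enumerate all 2^n bipolar interpretations. A minimal illustrative sketch follows; the signed-literal clause encoding is our own assumption, not the paper's code.

```python
from itertools import product

def es_min_cost(clauses, n):
    """Brute-force (ES-style) search over all 2**n bipolar assignments for
    the assignment minimizing the number of unsatisfied clauses.
    Each clause is a list of (index, sign) literals; sign = -1 means negated,
    and a literal (i, s) is true when state[i] == s."""
    best = None
    for state in product((-1, 1), repeat=n):
        cost = sum(
            all(state[i] != s for i, s in clause)  # clause unsatisfied
            for clause in clauses
        )
        if best is None or cost < best[0]:
            best = (cost, state)
    return best

# Toy example: (r0 OR NOT r1) AND (NOT r0 OR NOT r1)
clauses = [[(0, 1), (1, -1)], [(0, -1), (1, -1)]]
print(es_min_cost(clauses, 2))
print("states ES must enumerate for 52 neurons:", 2 ** 52)
```

The last line shows why ES becomes infeasible near 52 neurons: roughly 4.5 quadrillion candidate interpretations would have to be checked.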

Summary
In this section, we provide a brief summary of the beneficial properties of the logical structure of the proposed model, together with a simple summary of its most important accomplishments, clarifying the findings given in the Results and Discussion section with respect to the following points:
(a) Probability logic phases were applied to introduce various models addressing dataset-related requirements. Notably, one of the most significant advantages of δkSAT is that it can generate multifarious models by controlling parameters revealed by the dataset features in the logic system; it is a flexible logic system, although this flexibility is not fully explored in this study. The parameters can be used to generate logic models that are systematic, transforming to 2SAT when p(x m ) = 0 and to first-order logic when p(y m ) = 0, or high-order non-systematic when k = 3; the logic can also be SRAN3SAT for orders k = 1, 2, 3, or k = 2, 3, or k = 1, 3 by adding a new parameter p(z m ). In this case, regarding the probability of third-order clauses, we consider the probability constraint p(z m ) + p(y m ) + p(x m ) = 1; when p(y m ) = 0 and p(x m ) = 0, the logic becomes 3SAT. The main differences between δkSAT and other logic systems such as YRAN2SAT, RAN3SAT, and RAN2SAT, as well as systematic logic systems such as 2SAT and 3SAT, are the probability factors, whereby the dataset chooses the best structure through control of the probability parameter, and the number of negative literals, determined from the dataset and distributed over the clauses according to the probability parameter; these two main features render δkSAT unique.
(b) The testing and training phases were examined. By applying Equations (24) and (25) in the testing phase, the results show that the proposed model obtained optimal synaptic weights after checking the clauses' satisfaction; it also generated optimal synaptic weights through the WA method for small numbers of neurons and high parameter values. Equations (26) and (27) in the training phase showed the efficiency of the probability logic phase in producing various logical structures in the DHNN compared to the current systems.
(c) A novel analysis of the synaptic weight for DHNN − δ2SAT was introduced, termed the descriptive statistics of the mean synaptic weight. Previously, various statistical tests were used to study the behavior of synaptic weights and deduce information about the performance of a proposed logic system in the training phases; in this study, by contrast, the descriptive statistical method analyzed the synaptic weight distribution by obtaining the mean of the synaptic weights in the testing phase.
(d) Notably, in the Results and Discussion section, the sample size in Equation (5) gives the best number of negative literals for the desired logic needed to obtain satisfaction. Of particular significance are the models δ 1 2SATρ and δ 2 2SATρ with a high proportion (ρ = 0.9) and high probability (Y = 0.9) introduced by the probability logic phases; they have the best structure, as clarified by the measures used in the study (PON, NAE, and FNAE), and tended to be the best models in the training and testing phases, as also shown by the similarity index measures. This result is the opposite of that obtained by Zamri et al. [24], who concluded that a value of r = 0.5 for negative literals works efficiently in the logic phase and yields a better structure than r = 0.1, 0.9. The reason behind these contrary findings is that the proportion depends on the value in Equation (6), which gives a margin of error dependent on the Z value; additionally, the probability of second-order logic Y drawn from the dataset affects the δ γ 2SATρ models. All of these factors rendered our models the best in terms of logic structure.
(e) In this study, the probability distribution from the contributed dataset successfully generated an efficient, new logical structure for a DHNN. The discussion section introduced a comparative analysis of δ2SAT against other existing SATs, in which the proposed model was superior in several aspects, as shown in Table 18.
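Point (a) can be illustrated with a small sketch of how the two parameters might drive structure generation. The function name and clause representation are hypothetical; only the special cases stated in the text are encoded: when the second-order probability Y = p(y m ) is 1 (i.e., p(x m ) = 0) the structure reduces to pure 2SAT, and when Y = 0 it reduces to first-order logic.

```python
import random

def probability_logic_phase(num_clauses, Y, rho, rng=random.Random(0)):
    """Hedged sketch of the probability logic phase: each clause is
    second-order with probability Y (= p(y_m)) and first-order otherwise,
    and each literal is negated with probability rho. Returns clauses as
    lists of signed 1-based variable indices (negative = negated literal)."""
    clauses, v = [], 0
    for _ in range(num_clauses):
        k = 2 if rng.random() < Y else 1      # clause order chosen by Y
        clause = []
        for _ in range(k):
            sign = -1 if rng.random() < rho else 1  # negation chosen by rho
            clause.append(sign * (v + 1))
            v += 1
        clauses.append(clause)
    return clauses

# The best-performing setting reported in the study:
print(probability_logic_phase(5, Y=0.9, rho=0.9))
```

With Y = 1 every generated clause is second-order (pure 2SAT); with Y = 0 every clause is first-order, matching the degenerate cases described above.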

Conclusions and Future Work
It is critical to create a non-systematic logical framework in a DHNN, employing parameters conducive to building a flexible final neuronal state. This study introduced a new probability logic phase that assigns the probabilities of the first- and second-order clauses and the desired negative literals appearing in each clause, which helped to address the requirements of datasets. Statistical tools govern the creation of Θ δ2SAT during the probability logic phase. The novel probability logic phase of the proposed δ2SAT model provides a new enhancement with which to shape the logic structure according to the dataset; it was found that these models have high values in the two parameters (Y = 0.9, ρ = 0.9) of the two δ γ 2SATρ types, which introduced efficient logic structures in the probability logic phase. The new logic was embedded in DHNN − δ2SAT by reducing the logical inconsistency of the corresponding logical rule with a zero-cost function. The cost function corresponding to satisfaction was used to calculate the synaptic weights of the DHNN with a δ2SAT logical structure, whose effectiveness was examined using three proposed metrics in comparison with state-of-the-art methods such as 2SAT, MAJ2SAT, RAN2SAT, RAN3SAT, YRAN2SAT, and rSAT. The final neuron state was assessed based on various initial neuron states, statistical method parameters, and various performance metrics, such as learning errors, synaptic weight errors, energy profiles, testing errors, and similarity metrics, which were compared with existing benchmark works. To further demonstrate the efficiency and robustness of the proposed Θ δ2SAT , it was validated using four different second-order probability distributions with four different proportions in extensive simulations. Further, a new prospective logical investigation was introduced in this study, consisting of the analysis of the mean synaptic weight for DHNN − δ2SAT, to evaluate the existence of a flexible logical structure. The findings demonstrated that the proposed δ2SAT was successful in achieving a flexible logical structure with a prevailing attribute dataset compared to other state-of-the-art SATs.
For future work: (1) A metaheuristic analysis of the probability logic phase would aid the selection of the negative literals' positions in a logic system. (2) A metaheuristic analysis of the training phase would aid the satisfaction of Equation (12). (3) A metaheuristic analysis of the testing phase would aid the generation of a vast range of solution spaces. (4) Synaptic weight analysis can be applied in the training phase to address the effects of the energy function and global solutions on the synaptic weights; moreover, we can add measures of variability to address the deviation in the results. Notably, the robust architecture of ANNs integrated with our proposed logic would serve as a good foundation for real-life applications such as natural disaster prediction. In this context, each neuron would represent an attribute of the data, such as rainfall trends, river levels, and drainage and ground conditions. These attributes would be embedded into the logic-mining approach proposed by [45], leading to the formation of induced logic, which, in turn, has predictive and classificatory abilities. In other developments, the proposed logic system would be indispensable in finding the optimal route in the Travelling Salesman Problem.
(a) To formulate a novel logical rule called S-Type Random k Satisfiability, where k = 1, 2, in which statistical tools are integrated to structure the first- and second-order logic and to select the most suitable number of negative literals.
(b) To propose a probability logic phase that determines the probability of the appearance of first- and second-order literals and the distribution of the desired number of negative literals over every clause by considering the selected dataset.
(c) To implement the proposed S-Type Random 2 Satisfiability as a symbolic structure in the Discrete Hopfield Neural Network by reducing the logical inconsistency of the corresponding logical rule with a zero-cost function, as well as to determine the synaptic weights of the DHNN that achieve the cost function equivalent to the satisfied δ2SAT.
(d) To compare the effectiveness of δ2SAT in producing the appropriate logical structure during the probability logic phase, before training in the Discrete Hopfield Neural Network, by using three proposed metrics against the existing benchmark works.
(e) To examine the capability of the proposed δ2SAT relative to current logical rules in the training and testing phases, demonstrate synaptic weight management, and ascertain the quality and efficiency of the neuronal states in the DHNN via well-known performance metrics.
(f) To investigate the proposed δ2SAT system's structural behavior during the training phase and thereby demonstrate the flexibility of its logical structure by using a novel form of analysis, namely synaptic weight analysis via the mean of the synaptic weights.
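Objective (c) relies on the Wan Abdullah (WA) method: expand a clause's inconsistency and compare it with the Hopfield energy to read off the synaptic weights. Below is a minimal sketch for a single second-order clause; the signed-literal representation and function name are our own.

```python
def wa_synaptic_weights(clause):
    """Sketch of the Wan Abdullah (WA) method for one second-order clause.
    clause = [(i, s_i), (j, s_j)], with s = +1 for a positive literal and
    -1 for a negated one. The clause's inconsistency is
        E = (1 - s_i*S_i)(1 - s_j*S_j)/4,
    and comparing its expansion with the Hopfield energy
        H = -sum_i w_i S_i - sum_{i<j} w_ij S_i S_j
    gives the first-order weights w_i = s_i/4 and the second-order weight
    w_ij = -s_i*s_j/4."""
    (i, si), (j, sj) = clause
    return {i: si / 4, j: sj / 4, (i, j): -si * sj / 4}

# (r1 OR r2): expected w_1 = w_2 = 0.25 and w_12 = -0.25
print(wa_synaptic_weights([(1, 1), (2, 1)]))
```

For the fully negative clause (¬r1 ∨ ¬r2), the same expansion yields w_1 = w_2 = -0.25 with w_12 = -0.25, which is consistent with the negative-side shift of the mean synaptic weights discussed for high ρ.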

Figure 1 .
Figure 1. Block diagram of the proposed S-type Random 2 Satisfiability logic Θ δ2SAT .

Figure 2 .
Figure 2. Schematic diagram of DHNN − δ2SAT for both types of logic; the total number of literals is n for first- and second-order logic.

Figure 4 .
Figure 4. PON line representation for models in both types of logic: (a) δ 1 2SATρ and (b) δ 2 2SATρ.

Figure 5 .
Figure 5. NAE line representation for models in both types of logic: (a) δ 1 2SATρ and (b) δ 2 2SATρ.

Figure 6 .
Figure 6. FNAE column representation for models in both types of logic: (a) δ 1 2SATρ and (b) δ 2 2SATρ.

Figure 7 .
Figure 7. WFNA column representation for models in both types of logic: (a) δ 1 2SATρ and (b) δ 2 2SATρ.

Figure 8 .
Figure 8. RMSEtrain line representation for models in both types of logic: (a) δ 1 2SATρ and (b) δ 2 2SATρ.

Figure 9 .
Figure 9. RMSEweight column representation for models in both types of logic: (a) δ 1 2SATρ and (b) δ 2 2SATρ.

Figure 10 .
Figure 10. Column representation for models in both types of logic: (a) δ 1 2SATρ, (b) δ 2 2SATρ, and recently developed logic systems.

Figure 11 .
Figure 11. RMSEenergy column representation for models in both types of logic: (a) δ 1 2SATρ and (b) δ 2 2SATρ.

Figure 12 .
Figure 12. Sokal column representation for models in both types of logic: (a) δ 1 2SATρ and (b) δ 2 2SATρ.

Figure 13 .
Figure 13. Column representation for models in both types of logic: (a) δ 1 2SATρ and (b) δ 2 2SATρ.

Author Contributions:
Conceptualization, methodology, software, writing-original draft preparation, S.A.; formal analysis, validation, N.E.Z.; supervision and funding acquisition, M.S.M.K.; writing-review and editing, G.M.; visualization, N.A.; project administration, M.A.M. All authors have read and agreed to the published version of the manuscript. Funding: This research was supported by the Ministry of Higher Education Malaysia through the Transdisciplinary Research Grant Scheme (TRGS) with Project Code: TRGS/1/2022/USM/02/3/3. Data Availability Statement: Not applicable.

Algorithm 1: Pseudocode for generating the probability logic phase
Input: λ m , ρ, p(y m ), set of r i

Algorithm 2: Pseudocode of DHNN − δ2SAT
Begin
  Probability logic phase
    Initialize Θ δ2SAT ;
  Training phase
    do
      Minimize the cost function according to Equation (12);
      Calculate the synaptic weights using the WA method and store them in CAM;
      Calculate the global minimum energy H min according to Equation (19);
    End
  Testing phase
    Calculate the final neuron energy according to Equation (17);
    Confirm global or local minimum energy using Equation (18);
    If global minimum energy is reached
      Global minima solutions
    Else
      Local minima solutions
    End
End
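The retrieval step of Algorithm 2 can be sketched as follows, assuming WA-style weights. The asynchronous sign-update rule and the Lyapunov energy are standard for Discrete Hopfield Networks; the function name and data layout are illustrative, not the paper's implementation.

```python
def dhnn_retrieve(w2, w1, state, max_sweeps=100):
    """Sketch of the DHNN testing phase: asynchronously update each neuron
    S_i <- sign(local field) until the state stops changing, then return the
    state and its Lyapunov energy. w2[(i, j)] are second-order synaptic
    weights (i < j), w1[i] first-order weights, state a dict of bipolar
    (+1/-1) neuron values."""
    def field(i):
        h = w1.get(i, 0.0)
        for (a, b), w in w2.items():
            if a == i:
                h += w * state[b]
            elif b == i:
                h += w * state[a]
        return h

    for _ in range(max_sweeps):
        changed = False
        for i in sorted(state):
            new = 1 if field(i) >= 0 else -1
            if new != state[i]:
                state[i], changed = new, True
        if not changed:
            break
    # Lyapunov energy: lower is better; the global minimum marks satisfaction.
    energy = -sum(w * state[a] * state[b] for (a, b), w in w2.items())
    energy -= sum(w * state[i] for i, w in w1.items())
    return state, energy

# Clause (r1 OR r2) embedded via WA weights w1 = w2 = 0.25, w12 = -0.25:
final, H = dhnn_retrieve({(1, 2): -0.25}, {1: 0.25, 2: 0.25}, {1: -1, 2: -1})
print(final, H)
```

Starting from the unsatisfying state (-1, -1), the network relaxes to a satisfying state with energy -0.25, the global minimum for this single clause; an unsatisfying state would sit at a higher energy.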

Table 4 .
List of parameters used in DHNN − δ2SAT experimental setup.
s: smallest of the absolute values of the sums of x i in the Wilcoxon test
W: Wilcoxon test value (sum of the smallest and largest absolute values of the sums of x i )
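As a rough sketch of the Wilcoxon signed-rank quantities listed here, the following uses the standard textbook construction, in which the test statistic is the smaller of the positive- and negative-rank sums; the paper's exact convention for s and W may differ, and the sample is illustrative.

```python
def wilcoxon_signed_rank(x):
    """Rank the nonzero |x_i| (mean ranks for ties), sum the ranks of the
    positive and negative differences separately, and return the smaller
    sum (the usual test statistic) together with the total rank sum."""
    d = [v for v in x if v != 0]               # drop zero differences
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1                              # extend the tie group
        avg = (i + j) / 2 + 1                   # mean 1-based rank of the group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for v, r in zip(d, ranks) if v > 0)
    w_minus = sum(r for v, r in zip(d, ranks) if v < 0)
    return min(w_plus, w_minus), w_plus + w_minus

print(wilcoxon_signed_rank([1.2, -0.5, 0.8, -2.0, 1.5]))
```

For n nonzero differences the two rank sums always total n(n + 1)/2, which provides a quick sanity check on the ranking step.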

Table 7 .
The logical symbols in the experiment.

Table 8 .
PON results for models with both types of logic, δ 1 2SATρ and δ 2 2SATρ, and recently developed logic systems, with details determined by Wilcoxon test for median, divided by ρ value.
Note: The yellow highlights indicate the highest number in the column, and the green highlights indicate the smallest number in the column.

Table 9 .
Maximum and minimum NAE results for models with both types of logic δ 1 2SATρ, δ 2 2SATρ, and recently developed logic systems with details determined by Wilcoxon test for median divided by ρ value.
Note: The results highlighted in yellow indicate the highest number in the column and those in green the smallest; (p-value < 0.00) for all models in terms of the Wilcoxon test, which means that H 0 should be rejected.

Table 10 .
Maximum and minimum FNAE results for models in both types of logic, δ 1 2SATρ, δ 2 2SATρ, and recently developed logic systems, with details determined by Wilcoxon test for median. Note: The results highlighted in yellow indicate the highest number in the column and those in green the smallest; (p-value < 0.00) for all models in terms of the Wilcoxon test, which means that H 0 should be rejected.

Table 11 .
Maximum and minimum WFNAE results for models in both types of logic δ 1 2SATρ, δ 2 2SATρ and recently developed logic systems with details determined by Wilcoxon test for median.
Note: The results highlighted in yellow indicate the highest number in the column and those in green the smallest; (p-value < 0.00) for all models in terms of the Wilcoxon test, which means that H 0 should be rejected.

Table 12 records the values in Figure

Table 12 .
Maximum and minimum RMSEtrain results for models in both types of logic, δ 1 2SATρ and δ 2 2SATρ, with details determined by Wilcoxon test for median.
Note: The results highlighted in yellow indicate the highest number in the column and those in green the smallest; (p-value < 0.00) for all models in terms of the Wilcoxon test, which means that H 0 should be rejected.

Table 13 .
Maximum and minimum RMSEweight results for models in both types of logic, δ 1 2SATρ and δ 2 2SATρ, with details determined by Wilcoxon test for median. Note: The results highlighted in yellow indicate the highest number in the column and those in green the smallest; (p-value < 0.00) for all models in terms of the Wilcoxon test, which means that H 0 should be rejected.

Table 14 .
Maximum R G results for models in both types of logic, δ 1 2SATρ, δ 2 2SATρ, and RAN2SAT, with details determined by Wilcoxon test for median.
Note: The yellow highlighting indicates the highest number in the column; (p-value < 0.00) for all models in terms of the Wilcoxon test, which means that H 0 should be rejected.

Table 15 .
Maximum and minimum RMSEenergy results for models in both types of logic, δ 1 2SATρ and δ 2 2SATρ, with details determined by Wilcoxon test for median. Note: The results highlighted in yellow indicate the highest number in the column and those in green the smallest; (p-value < 0.00) for all models in terms of the Wilcoxon test, which means that H 0 should be rejected.

Table 16 .
Maximum and minimum Sokal results for models in both types of logic, δ 1 2SATρ, δ 2 2SATρ, and RAN2SAT, with details determined by Wilcoxon test for median.

Note: The results highlighted in yellow indicate the highest number in the column and those in green the smallest; (p-value < 0.00) for all models in terms of the Wilcoxon test, which means that H 0 should be rejected.

Table 17 .
Maximum and minimum R tv results for models in both types of logic, δ 1 2SATρ, δ 2 2SATρ, and RAN2SAT. Note: The results highlighted in yellow indicate the highest number in the column and those in green the smallest.

Table 18 .
A summary of comparative analysis between δ2SAT and other SATs.