Article
Peer-Review Record

Election Algorithm for Random k Satisfiability in the Hopfield Neural Network

Processes 2020, 8(5), 568; https://doi.org/10.3390/pr8050568
by Saratha Sathasivam 1,*, Mohd. Asyraf Mansor 2, Mohd Shareduwan Mohd Kasihmuddin 1 and Hamza Abubakar 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 29 January 2020 / Revised: 28 March 2020 / Accepted: 7 April 2020 / Published: 11 May 2020
(This article belongs to the Special Issue Neural Computation and Applications for Sustainable Energy Systems)

Round 1

Reviewer 1 Report

The paper addresses a very hot topic in the field.

Given my limited background on this topic, I would, perhaps more than others, have liked a crisper discussion of the novelty of the work with respect to existing works, in particular highlighting the differences from the other considered models in terms of applicability, future exploitation, and research perspective. I suggest broadening the discussion comparing with other methods, perhaps adding some future perspective on the use of the model itself. However, the paper is interesting, self-contained, well organized, and well written, so I think it deserves to be accepted for publication in Processes.

Author Response

Hamza Abubakar

School of Mathematical Sciences

7/03/2020

The Editor,

Neural Computation and Applications for Sustainable Energy Systems

Dear editor,

REVISION AND RESUBMISSION OF MANUSCRIPT

We would like to thank the reviewer for a careful and thorough reading of this manuscript, titled “MODIFIED ELECTION ALGORITHM IN ACCELERATING THE PERFORMANCE OF HOPFIELD NEURAL NETWORK FOR RANDOM kSATISFIABILITY”, and for the thoughtful comments and constructive suggestions, which helped to improve its quality. We have carried out all the necessary corrections and amendments and revised the manuscript accordingly, as instructed by the reviewers. We found that the general comments have helped us to improve the clarity and quality of the manuscript; thus, the presentation has been improved entirely after the amendment.

Reviewer 2 Report

MODIFIED ELECTION ALGORITHM IN ACCELERATING THE PERFORMANCE OF HOPFIELD NEURAL NETWORK FOR RANDOM kSATISFIABILITY

The Election Algorithm is a metaheuristic model. The authors utilize a hybridized EA assimilated with the Hopfield neural network (HNN) in carrying out a random logic program (HNN-R2SATEA) and compare it to the existing traditional exhaustive search model (HNN-R2SATES) and the recently introduced HNN-R2SATICA model. The authors demonstrate that their hybrid model outperformed the other existing models based on the Global Minima Ratio (Zm), Mean Absolute Error (MAE), Bayesian Information Criterion (BIC), and Execution Time (ET).

The article is written in a demonstrative way. All the premises are clearly explained, and a pseudocode of the Election Algorithm is provided. The implementation methods of HNN-R2SATES, HNN-R2SATEA and HNN-R2SATICA are clearly described.

From the results obtained from the simulations, it appears that the proposed HNN-R2SATEA model is a more improved and robust heuristic than HNN-R2SATICA and HNN-2SATES in accelerating the learning phase of HNN in carrying out a random-2SAT logic program. The EA reveals itself to be more robust in boosting the training phase of HNN for the random-2SAT logic program, and it comes closest to the global minimum, irrespective of the NN released into the HNN.

The research work described in the paper and the results obtained are really interesting; nevertheless, a final recommendation from the Referee’s side will be made after the following mandatory requests are fulfilled:

- Mandatory request 1

The authors have to provide the code, under a GitHub repository for example. Indeed, the referee would like to reproduce the results presented in the paper.

- Mandatory request 2

The authors have to provide all the training set and test set used for their demonstration under a .csv format.

Author Response

Hamza Abubakar

School of Mathematical Sciences

7/03/2020

The Editor,

Neural Computation and Applications for Sustainable Energy Systems

Dear editor,

REVISION AND RESUBMISSION OF MANUSCRIPT

We would like to thank the reviewer for a careful and thorough reading of this manuscript, titled “MODIFIED ELECTION ALGORITHM IN ACCELERATING THE PERFORMANCE OF HOPFIELD NEURAL NETWORK FOR RANDOM kSATISFIABILITY”, and for the thoughtful comments and constructive suggestions, which helped to improve its quality. We have carried out all the necessary corrections and amendments and revised the manuscript accordingly, as instructed by the reviewers. We found that the general comments have helped us to improve the clarity and quality of the manuscript. This paper is part of ongoing research; the efficiency of our proposed model needs to be further ascertained by comparison with other metaheuristics such as swarm intelligence methods (ABC, AIS, ACO, etc.). This paper focuses mainly on simulated data sets, not real-life data. The program will be further extended to cater for reverse analysis on real-life data sets. The program is copyrighted under a university research grant and can be shared after the completion of this project. However, the results generated by the Dev-C++ program are included for your consideration and further necessary action.

 

Author Response File: Author Response.pdf

Reviewer 3 Report

This paper proposes the use of the modified election algorithm in HNN for the task of k-satisfiability. The paper has its own contributions and merit. However, I found the paper to be poorly written and organized. The manuscript contains numerous grammatical errors. I would like the authors to revise the paper thoroughly and address the following issues. I would be able to review the revised paper.

Section 0: Abstract

0.1 …, clearly proven…, …outcome portrays.. use better word choice?

Section 1: Introduction

1.1 There is too much irrelevant information in this section. I feel like the manuscript is written for a thesis, and not for journal publication. The detailed information contained in this section is more suitable for a thesis. For journal publication, the writing needs to be concise and straight to the point.

1.2 A sentence should not start with a reference. [19].

1.3 The novelty and contribution of the work need to be highlighted.

1.4 A paper organization needs to be added at the end of this section.

Section 2:

2.1 This section, together with other sections (3 – 10), can be rewritten in a more concise form. I found the information contained in this section to be a repetition of the concepts that have been discussed in section 1.

Section 3:

3.1 A proper introduction and background of HNN should be included.

3.2 A literature review on the applications of HNN in various problem domains needs to be included.

3.3 Line 169: spacing

3.4 The flow of the paper halts completely without mentioning any problem that HNN can solve. Hence, Equations (3)-(6) have been created out of thin air. Kindly address this issue.

3.5 What is the optimization problem that the authors intend to solve?

3.6 How is the proposed HNN different from the original Hopfield and Tank (1985)?

3.7 The issue of HNN getting stuck at local minima needs to be elaborated.

Section 4:

4.1 This short section can be combined with other aforementioned sections.

4.2 Equation (7) is clearly a Horn Formula. What is the purpose of showing this Horn formula when the focus of this work is on Random SAT?

4.3 Excessive usage of non-important mathematical formulations such as Equation (8). The author should revise Equation (8), because B is a clause that makes C the goal of the equation.

Section 5:

5.1 How do you relate / deduce the Equation (8) into Equation (9)?

5.2 Equation (10) is not typeset properly.

5.3 The three components of the logic program have been written repetitively many times.

Section 6:

6.1 I think this is the main idea/novelty that the author would like to optimize in the first place. The idea of the newly proposed RandomSAT is great, but this section is poorly written.

6.2  Justify what is “valid clause”?

6.3 According to your cost function, the value of the cost function is always 0. How do you guarantee that your proposed RanSAT will not fall into the UNSAT category (since the author specifically testifies to the usage of UNSAT)?

6.4 The unnecessary usage of the mathematical term of limit in Equation (12). What does x represent? Pr is not well defined. What is r? A probability? If yes, why can the value be more than 1?

6.5 Equation (13) clearly does not resemble your equation in equation (9)-(11). Kindly propose a new equation that represents random-2SAT.

6.6 “Comparatively, a high number of neurons (atoms) per number of clause would increase the probability of a number of neurons beings satisfied [18-19, 28].” According to my reading, none of these citations made the above claim.

6.7 I am very confused with the reason why equation (13) should be expanded into Equation (14)-(16). What purpose does it serve?

6.8 According to Equation (15), the minimum value of E_QRkSAT = 1, but your claim defines E_QRkSAT otherwise.

6.9 Overall this part requires major improvement.

Section 8:

8.1 The author seems to be confused between the discrete and continuous EA algorithms. I appreciate the author’s effort in explaining the analogy of the EA, but the “real and informative” explanation given in this section is very poor and requires major revision.

8.2 Eq (23) and (24) are not typeset properly.

8.3 The author claims EA is a minimization problem, but Equation (21) depicts the objective of the system as maximizing the fitness function. Contradiction?

8.4 Equation (22) totally contradicts your proposed network. What happens if C=0? The accumulated fitness value will be zero.

8.5 What is the difference between fitness in Equation (22) and (24)? Why?

8.6 In stage 1, p is defined as a bipolar (1, -1) representation, but as soon as we arrive at the next sentence, the bipolar representation becomes a real number p = 1, 2, …, P. Which one is which?

8.7 The author seems to be confused about the difference between scalar and vector states. Equation (27) is a cross product, but it is intended to be a normal scalar multiplication.

8.8 There is no link between Equations (28) and (29). The distance equation is created out of thin air. There is no vector-to-scalar transformation whatsoever.

8.9 Similarly, in Equation (31), Xs is a scalar, yet the author treats it as a cross product of vectors.

8.10 In Equation (32), the author repeats the same issue. The variable M is created out of thin air. There is no connection with the previous stage.

8.11 What is the difference between equation (34) and (29)?

8.12 How does coalition actually work? Analogy aside, how can your solution be improved by using coalition? What mathematical formulation is involved?

8.13 In stage 5, what are the termination criteria? Again, the author tries to convince the reader with analogy, without any mathematical basis or justification.

8.14 Figure 1 tells us very little about your work. Add the relevant equations to the lines accordingly.

Section 9:

9.1 This section lacks appropriate citations related to the ICA. Please add some to increase the depth of this section.

9.2 Is the ICA being used in the training or the testing phase of the network?

9.3 Kindly justify the reasons behind selecting the ICA for comparison with the Election Algorithm. Any special operators or features?

9.4 The section on the ICA is poorly written and does not reflect the modified version to be hybridized with HNN. Please revise thoroughly.

9.5 Please justify the significance of Equation 41.

9.6 Kindly improve the symbol of Pi for the line before Equation 42.

9.7 What is equation 42? The usage or meaning of the formula is not written.

9.8 Please explain Equation 49 and kindly mention the impact of this Equation to the results at the end of the simulations.

9.9 You have mentioned in Stage 1 that random-kSAT is continuous and non-constrained? Should it be discrete and constrained?

9.10 Please rewrite the Stage 1 until Stage 6 with proper mathematical formulations and correct problem. (You are dealing with discrete not continuous)

9.11 MAJOR CONCERN: Please remove unnecessary equations and kindly rearrange the important formulas of ICA in the correct order. Please rewrite your pseudocode and make sure all the important operators are included.

Section 10:

10.1 This short section should NOT be a standalone section. It should be absorbed in perhaps the methodology section.

Section 11:

Line 597: extra comma

Section 12:

12.1 The discussion only covers the trend of the results. Minimal effort has been made by the author to discuss how their proposed method has the optimization property.

12.2 The usage of BIC is not appropriate in this experimental setup, because your dataset is limited to a simulated dataset.

12.3 Again, the author seems to focus more on the analogy rather than justify the usage of the proposed method.

12.4 EA should be compared with other state-of-the-art algorithms such as GA, HS, ABC, ACO, etc.

12.5 How can the coalition process avoid the solution to be trapped in local minima? Please justify. How would it affect the time in particular?

12.6 The last sentence in your result and discussion “ Hence, the individual created by EA achieved global minima swiftly compared to ICA and ES searchin methods.” is rather confusing and insignificant. Kindly revise.

12.7 Please redraw the graphs especially Figure 6 and 8.

12.8 The trends should be described thoroughly in order to provide an in-depth representation of the results.

12.9 What is the difference between “brute-force” and “exhaustive search”? I noticed these terms are used widely in your work to explain the same thing. Please be consistent and use the correct term.

12.10 How does the MSE accumulation penalize BIC? Kindly elaborate further.

12.11 What are optimally satisfied clauses? How do they relate to Zm?

Section 13:

13.1 What do you mean by “more improved and robust heuristic”? It should be something that can be measured quantitatively.

13.2 This section contains too many grammatical errors and bad sentence structures.

Author Response

Hamza Abubakar

School of Mathematical Sciences

07/03/2020

The Editor

Neural Computation and Applications for Sustainable Energy Systems

Dear editor

REVISION AND RESUBMISSION OF MANUSCRIPT

We would like to thank you for giving us the opportunity to re-submit a revised draft of our manuscript, titled “MODIFIED ELECTION ALGORITHM IN ACCELERATING THE PERFORMANCE OF HOPFIELD NEURAL NETWORK FOR RANDOM kSATISFIABILITY”. We appreciate the time and effort that you and the reviewers have dedicated to providing valuable feedback on our manuscript, and we are grateful to the reviewers for their insightful comments. We have incorporated changes and revised the manuscript to reflect most of the suggestions provided by the reviewers. We found that the general comments have helped us to improve the clarity and quality of the manuscript; thus, the presentation has been improved entirely after the amendment. The responses are tabulated below.

COMMENT

CORRECTION

SECTION 0: ABSTRACT

RESPONSES

0.1 …, clearly proven…, …outcome portrays.. use better word choice?

Thank you very much for your comment. We have revised this section accordingly

SECTION 1: INTRODUCTION

RESPONSES

1.1 There are too much irrelevant information in this section. I feel like the manuscript is written for a thesis, and not for journal publication. The detailed information contained in this section is more suitable for a thesis. For journal publication, the writing needs to be concise and straight to the point.

Thank you very much for your comment on our manuscript. We have revised and rewritten this section to be more concise and straight to the point.

 

1.2 A sentence should not start with a reference. [19].

Thank you very much for the correction. The sentence has been rewritten accordingly.

1.3 The novelty and contribution of the work needs to be highlighted.

Thank you very much for your comment. The novelty and contribution of this paper have been added in the manuscript accordingly.

1.4 A paper organization needs to be added at the end of this section.

Thank you very much for your comment. A paper organization has been added accordingly.

SECTION 2:

RESPONSE

2.1 This section, together with other sections (3 – 10), can be rewritten in a more concise form. I found the information contained in this section to be a repetition of the concepts that have been discussed in section 1.

Thank you very much for your comment. This section has been revised accordingly.

SECTION 3:

RESPONSE

3.1 A proper introduction and background of HNN should be included.

Thank you very much for your comment on our manuscript. A proper introduction and background of HNN have been added accordingly.

3.2 A literature review on the applications of HNN in various problem domains needs to be included.

Thank you very much for your comment. A review on the applications of HNN in various problem domains has been included accordingly.

3.3 Line 169: spacing

Thank you very much for the correction.  Line spacing has been adjusted accordingly. 

3.4 The flow of the paper halts completely without mentioning any problem that HNN can solve. Hence, Equations (3)-(6) have been created out of thin air. Kindly address this issue.

Thank you very much for your comment. An optimization problem that can be solved using HNN has been added to the manuscript. Equation (3) is an example of the derived cost function, associated with the negation of all random kSAT clauses; it captures the inconsistency of the logical clause in Equation (2), and hence the condition of satisfiability of Equation (2). Equations (4) and (5) are the HNN local field equations used in updating the states of the neurons.

Equation (6) describes the HNN energy dynamics; the energy of the network decreases monotonically, which is vital for the convergence of the network towards an optimal solution.

Note that, from an optimization point of view, the focus is not the derivation and analysis of the HNN local field or energy equations, but rather adopting them, owing to the ability of the HNN energy function to converge towards a state that corresponds to a minimum solution.

Feasibility and order-preservation of the HNN energy function imply that the network will tend to find an optimal feasible solution for a given instance of the combinatorial optimization problem.

The derivation and analysis of Equations (4)-(6) were first introduced by Little in 1974 and further popularized by Hopfield in 1982 and 1984. The convergence analysis is still subject to further studies.

3.5 What is the optimization problem that the authors intend to solve?

Thank you very much for your comment. This paper focuses on optimizing the random satisfiability problem, which is one of the discrete combinatorial optimization problems [48-50]. The satisfiability problem (SAT) is, given a formula, to check whether it is satisfiable. This decision problem is of central importance in many areas of computer science, including theoretical computer science, complexity theory, algorithmics, cryptography and artificial intelligence. The satisfiability problem has been treated as an optimization problem in mathematical optimization; some works have focused on formulating it as a non-convex, exact, exterior, penalty-based problem with a coercive objective function. The method focuses on random satisfiability, which is NP-complete; it falls into the category of approximation schemes for solving the satisfiability problem and is sub-optimal and partially heuristic in nature. In this paper, we treat satisfiability as a representation problem. We focus on the random kSAT representation, which has demonstrated the ability to represent real-life and industrial applications [41-42, 46-50].
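As an illustrative sketch only (not code from the manuscript, and with an assumed signed-integer clause encoding), the decision problem described above can be checked by exhaustive search over all assignments, the kind of brute-force baseline the paper compares against:

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Exhaustive-search SAT check.

    `clauses` encodes a CNF formula: each clause is a list of non-zero
    signed integers, where literal k means variable |k| is True if k > 0.
    Returns a satisfying assignment as a dict, or None if UNSAT.
    """
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# A small random-2SAT style instance: (x1 v ~x2) ^ (~x1 v x2) is SAT,
# while (x1) ^ (~x1) is UNSAT.
print(satisfiable([[1, -2], [-1, 2]], 2) is not None)  # True
print(satisfiable([[1], [-1]], 1))                     # None
```

The exponential 2^n sweep over assignments is exactly why heuristic learning schemes such as EA are attractive for larger instances.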

 

3.6 How is the proposed HNN different from the original Hopfield and Tank (1985)?

Thank you very much for your comment. The HNN utilized in this paper is the same as the original HNN of 1982 and 1985. In our paper, we utilized the HNN energy function due to its ability to implement a minimization problem. Hence, feasibility and order-preservation of the HNN energy function imply that the network will tend to find an optimal feasible solution for the given instance of the combinatorial optimization problem (TSP, SAT, the vertex cover problem, etc.).

3.7 The issue of HNN getting stuck at local minima needs to be elaborated.

 

Thank you very much for your comment. The energy function of an HNN has many local minima. Consequently, the network will probably reach an equilibrium state that does not correspond to a problem solution. The search for evolution strategies (GA, ICA, EA, etc.) to move the network out of local minima and take it to a global minimum is an important task in this field. The problem of HNN getting stuck at local minima is a subject of extensive research, both analytical and experimental.
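To make the local-minima point concrete, here is a minimal self-contained sketch (ours, not the authors' code) of asynchronous bipolar Hopfield updates: each flip can only lower the energy, so the network always settles, but which fixed point it reaches depends entirely on the initial state.

```python
def hopfield_settle(W, theta, state, max_sweeps=100):
    """Asynchronous Hopfield updates on bipolar states in {-1, +1}.

    W is a symmetric weight matrix with zero diagonal, theta the thresholds.
    Every accepted flip lowers E = -0.5 * s'Ws + theta's, so the run stops
    at the nearest minimum of E - possibly a local one.
    """
    s = list(state)
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(s)):
            # local field of neuron i
            h = sum(W[i][j] * s[j] for j in range(len(s))) - theta[i]
            new = 1 if h >= 0 else -1
            if new != s[i]:
                s[i], changed = new, True
        if not changed:          # fixed point (equilibrium state) reached
            break
    return s

W = [[0, 1], [1, 0]]             # two mutually exciting neurons
theta = [0, 0]
print(hopfield_settle(W, theta, [1, 1]))    # [1, 1]   - one minimum
print(hopfield_settle(W, theta, [1, -1]))   # [-1, -1] - the other basin
```

Both final states are stable minima of the same energy; for a SAT-derived energy only one of the basins may correspond to a satisfying assignment, which is why an external search strategy is needed to escape the wrong one.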

SECTION 4:

RESPONSES

4.1 This short section can be combined with other aforementioned sections.

Thank you very much for your comment. This section has been combined with the aforementioned sections accordingly.

4.2 Equation (7) is a Horn Formula. What is the purpose of showing this Horn formula when the focus of this work is on Random SAT?

Thank you very much for your comment on our manuscript. Equations (7) and (8) are examples of a logic program. We feel that they are not important, and they have been replaced with Random SAT in the manuscript accordingly.

4.3 Excessive usage of non-important mathematical formulations such as Equation (8). The author should revise Equation (8), because B is a clause that makes C the goal of the equation.

Thank you very much for your comment. Eq.(7) and (8) have been revised and replaced with random kSAT accordingly.

SECTION 5:

RESPONSES

5.1 How do you relate / deduce the Equation (8) into Equation (9)?

 

Eq. (8) is a logic program that has been converted into Boolean algebra form in Eq. (9). This has been presented in the methodology section: given any logic program, all the random-kSAT logical clauses are converted into Boolean algebra.

However, this section has been revised in a more simplified manner to focus on random kSAT accordingly. We thank you very much for your valuable comment on our manuscript.

 

5.2 Equation (10) is not typeset properly.

Thank you very much for your comment. The proper typesetting of eq. (10) has been observed accordingly.

5.3 The three components of the logic program have been written repetitively many times.

Thank you very much for your comment.  This section has been revised accordingly.

SECTION 6:

RESPONSES

6.1 I think this is the main idea/novelty that the author would like to optimize in the first place. The idea of the newly proposed RandomSAT is great, but this section is poorly written.

Thank you very much for your comment.  This section has been revised accordingly.

6.2  Justify what is “valid clause”?

 

Thank you very much for your comment. By “valid clause” we are referring to the possible clauses within the Boolean formula for the appropriate value of k (random 2SAT and random 3SAT, respectively). However, we feel the word “valid clause” is not appropriate, and we have revised the sentence accordingly.

6.3 According to your cost function, the value of the cost function is always 0. How do you guarantee that your proposed RanSAT will not fall into the UNSAT category (since the author specifically testifies to the usage of UNSAT)?

Thank you very much for your comment. The focus of this paper is on the optimization of random kSAT. Our aim is to minimize the logical inconsistency through the energy minimization capacity of HNN, so that the solution achieved by the network will always be feasible. The cost function can ONLY BE ZERO WHEN ALL THE CLAUSES ARE SATISFIED: the minimum value of the cost function is 0, corresponding to the fact that all the clauses are satisfied, and its value (an integer) is proportional to the number of unsatisfied clauses. Whether a Boolean formula is SAT or UNSAT depends on the method or pattern of generation; there are different methods to generate random SAT instances that are surely SAT or UNSAT. Achlioptas et al. introduced a generator of satisfiability formulas, based on Latin squares, that creates only satisfiable instances. For example, regular random kSAT is UNSAT and harder than uniform random SAT. SAT versus UNSAT has been intensively studied and proved in [29-40, 51-52].

Our focus is not SAT versus UNSAT, but rather to optimize random SAT by minimizing the logical inconsistencies in the formula representation. The aim is to find a feasible solution for which the cost function is optimal. We utilized the HNN energy function, which implements a minimization problem; the energy function of HNN always converges to a state that corresponds to a given configuration.
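The property that the cost is zero exactly when all clauses are satisfied can be sketched in a few lines (an illustrative sketch with an assumed clause encoding, not the manuscript's code):

```python
def cost(clauses, state):
    """Count the unsatisfied clauses under a bipolar assignment.

    state[i] is +1 (True) or -1 (False) for variable i+1; each clause is a
    list of signed variable indices.  The cost is 0 exactly when every
    clause is satisfied, and otherwise equals the number of unsatisfied
    clauses - the integer quantity the learning phase drives to zero.
    """
    def literal_true(lit):
        return state[abs(lit) - 1] == (1 if lit > 0 else -1)
    return sum(0 if any(literal_true(lit) for lit in clause) else 1
               for clause in clauses)

clauses = [[1, -2], [-1, 2], [1, 2]]   # a toy random-2SAT formula
print(cost(clauses, [1, 1]))           # 0 - all three clauses satisfied
print(cost(clauses, [-1, -1]))         # 1 - only (x1 v x2) fails
```

Minimizing this count over bipolar states is the discrete objective that the HNN energy function is made to mirror.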

6.4 The unnecessary usage of the mathematical term of limit in Equation (12). What does x represent? Pr is not well defined. What is r? A probability? If yes, why can the value be more than 1?

We realised that Eq. (12) served the same purpose as Eq. (3). We have therefore removed Eq. (12) from the manuscript.

We thank you very much for your valuable comment on our manuscript.

6.5 Equation (13) clearly does not resemble your equation in equation (9)-(11). Kindly propose a new equation that represents random-2SAT.

Thank you very much for your comment. This section has been revised accordingly. Equations (7)-(11) are the foundation of logic programming and satisfiability; we have replaced them with random SAT.

6.6 “Comparatively, a high number of neurons (atoms) per number of clause would increase the probability of a number of neurons beings satisfied [18-19, 28].” According to my reading, none of these citations made the above claim.

Thank you very much for your comment. The result analyses in those papers prove the assertion. However, you may refer to [23, 28, 48-50] for more details.

 

 

6.7 I am very confused with the reason why equation (13) should be expanded into Equation (14)-(16). What purpose does it serve?

 

Thank you very much for your comment. The procedure to generate Equations (13) to (16) has been presented in the methodology section. The purpose is to calculate the synaptic connections of HNN-R2SAT, which will later be stored as CAM. The method adopted for calculating the synaptic connections is the Wan Abdullah method, proposed in 1992 [17-19, 28], which is equivalent to Hebbian learning.
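As background for the Hebbian-learning equivalence mentioned above, here is a sketch of the classical Hebbian storage prescription for a Hopfield network (the standard textbook rule, not the Wan Abdullah derivation itself):

```python
def hebbian_weights(patterns):
    """Classical Hebbian storage: W_ij = (1/N) * sum_p xi_i^p * xi_j^p.

    `patterns` is a list of bipolar (+1/-1) patterns of length N; the
    self-connections W_ii are kept at zero, as the Hopfield energy requires.
    """
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

W = hebbian_weights([[1, -1, 1]])
print(W[0][2])   # 0.333...: neurons 0 and 2 agree in the stored pattern
```

The resulting W is symmetric with zero diagonal, so the stored patterns become minima of the Hopfield energy and can be recalled as CAM content.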

 

6.8 According to Equation (15), the minimum value of E_QRkSAT = 1, but your claim defines E_QRkSAT otherwise.

Thank you very much for your comment on our manuscript. These are the correct mappings that make the entire R2SAT formula true. The correct interpretation will be stored as the CAM of HNN, which will be recalled when all the clauses are satisfied. It is the value of minimum energy obtained after comparing the cost function with the HNN energy function.

6.9 Overall this part requires major improvement.

Thank you very much for your comment. We have revised this section accordingly.

SECTION 9:

RESPONSES

9.1 This section lacks appropriate citations related to the ICA. Please add some to increase the depth of this section.

 

Thank you very much for your comment. This section has been revised and we have cited [50, 55, 56, 57 and 58].

9.2 Is the ICA being used in the training or the testing phase of the network?

Thank you very much for your comment. We concentrate our work only on the training phase of HNN.

9.3 Kindly justify the reasons behind selecting the ICA for comparison with the Election Algorithm. Any special operators or features?

 

Thank you very much for your comment. Both ICA and EA are socio-inspired metaheuristic algorithms based on a population of individuals. A generation operates in parallel at multiple points at once (a population of size N) rather than in a single iteration. They can lead to good solutions for computational problems of great complexity. Due to their capacity to escape local optima, uncertainties in objectives can be better handled, and they can manage several objectives with only a few modifications to the algorithms. Their control parameters are presented in Tables 2 and 3.

 

9.4 The section on the ICA is poorly written and doesn’t reflect the modified version to be hybridized with HNN. Please revise thoroughly.

9.5 Please justify the significance of Equation 41.

Thank you very much for your comment. The section has been revised accordingly.

Eq. (41) is the assimilation step, which is significant in selecting the variable to replace: it moves the colonies toward their relevant imperialist in a randomly deviated direction.

9.6 Kindly improve the symbol of Pi for the line before Equation 42.

Thank you very much for your comment. The symbol has been replaced with a more appropriate one.

9.7 What is Equation 42? The usage or meaning of the formula is not written.

Thank you very much for your comment. Equations (41) and (42) are used for the assimilation policy: the movement of colonies toward their relevant imperialist in a randomly deviated direction.

9.8 Please explain Equation 49 and kindly mention the impact of this Equation to the results at the end of the simulations.

 

Thank you very much for your comment. Eq. (49) is significant for splitting the mentioned colonies among the respective empires on the basis of their ownership. After a while, all the empires except the most powerful one will collapse, and all the colonies will be under the control of one empire.

9.9 You have mentioned in Stage 1 that random-kSAT is continuous and non-constrained? Should it be discrete and constrained?

 

Thank you very much for your comment. Boolean satisfiability problems (SAT) are generally discrete in nature; they belong to the class of discrete combinatorial optimization problems. Therefore, random kSAT is discrete in nature and further belongs to the class of constrained optimization problems.

9.10 Please rewrite the Stage 1 until Stage 6 with proper mathematical formulations and correct problem. (You are dealing with discrete not continuous)

Thank you very much for your comment. Both random kSAT and HNN are discrete.  We have revised this section accordingly.

9.11 MAJOR CONCERN: Please remove unnecessary equations and kindly rearrange the important formulas of ICA in the correct order. Please rewrite your pseudocode and make sure all the important operators are included.

We thank you very much for your valuable comment on our manuscript. Unnecessary equations were removed, and all the important formulas of ICA were rearranged in the correct order accordingly. We have rewritten the ICA pseudocode and included all the important operators.

SECTION 10:

RESPONSES

10.1 This short section should NOT be a standalone section. It should be absorbed in perhaps the methodology section.

Thank you very much for your comment. This section has been combined with methodology accordingly.

SECTION 11:

RESPONSES

Line 597: extra comma

Thank you very much for your comment. The extra comma has been removed accordingly.

SECTION 12:

RESPONSES

12.1 The discussion only discusses about the trend of the result. Minimal effort has been done by the author to discuss how their proposed method has the optimization property.

Thank you very much for your comment. This section has been revised accordingly.

12.2 The usage of BIC is not appropriate in this experimental setup because your dataset is limited to simulated dataset.

Thank you very much for your comment. The BIC is used here to assess the computational efficiency of a model.
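For reference, one common form of the BIC computed from the mean squared error is sketched below. This is a generic textbook form, not necessarily the exact expression used in the manuscript; it also shows how an accumulating MSE penalizes the score (the concern raised later in comment 12.10).

```python
import math

def bic(mse, n_obs, n_params):
    """Bayesian Information Criterion from MSE (lower is better),
    in the common form n*ln(MSE) + k*ln(n). A larger accumulated MSE
    inflates the first term, penalizing the model."""
    return n_obs * math.log(mse) + n_params * math.log(n_obs)
```

With a perfect fit (`mse = 1` in normalized units) only the complexity penalty `k*ln(n)` remains.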

12.3 Again, the author seems to focus more on the analogy rather than justify the usage of the proposed method.

Thank you very much for your comment. This section has been revised accordingly.

12.4 EA should be compared with other state of the art algorithm such as GA, HS, ABC, ACO and etc.

Thank you very much for your comment. Comparing EA with swarm-based intelligence algorithms such as ABC, ACO, and AIS will be our future research direction.

12.5 How can the coalition process avoid the solution to be trapped in local minima? Please justify. How would it affect the time in particular?

Thank you very much for your comment. The coalition process in the EA search reduces the additional iterations required to reach a satisfying assignment. In fact, non-improving solutions are enhanced by the coalition strategy during the HNN learning phase. Reducing additional iterations subsequently reduces computation time. This section has been revised accordingly.
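One way to picture the coalition move is a greedy merge of two party candidates: the fitter candidate leads, and bits from the weaker candidate are adopted only where they improve fitness, so a non-improving solution cannot drag the search into a worse basin. This is our illustrative sketch of the coalition idea, not the manuscript's exact operator.

```python
def coalition(cand_a, cand_b, fitness):
    """Greedy coalition sketch: the fitter candidate leads the merged
    party; bits of the weaker candidate are adopted one at a time only
    if they strictly improve fitness."""
    leader, other = ((cand_a, cand_b) if fitness(cand_a) >= fitness(cand_b)
                     else (cand_b, cand_a))
    merged = list(leader)
    for i, bit in enumerate(other):
        trial = merged.copy()
        trial[i] = bit
        if fitness(trial) > fitness(merged):  # adopt only improving bits
            merged = trial
    return merged
```

Because each adopted bit strictly improves fitness, the merged candidate is at least as good as either parent, which is how the move helps escape a non-improving state without extra full iterations.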

12.6 The last sentence in your result and discussion “ Hence, the individual created by EA achieved global minima swiftly compared to ICA and ES searchin methods.” is rather confusing and insignificant. Kindly revise.

Thank you very much for your comment. This statement has been revised accordingly.

12.7 Please redraw the graphs especially Figure 6 and 8.

Thank you very much for your comment. All figures have been re-plotted accordingly.

12.8 The trends should be described thoroughly in order to provide in depth results representation.

Thank you very much for your comment. We have improved the presentation of the trends accordingly.

12.9 What is the differences between “brute-force” and “exhaustive search”? I noticed these terms are widely being used in your work to explain the same reasons. Please be consistent and use the correct term.

Thank you very much for your comment. Exhaustive search is also known as brute-force search: an approach in which there is no better strategy than to explore the entire search space, examining every possible candidate solution.
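The exhaustive (brute-force) strategy for SAT can be sketched in a few lines: enumerate all 2^n assignments and return the first one that satisfies every clause. The clause encoding (a literal as a `(variable, negated?)` pair) is our illustrative choice, not the manuscript's.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exhaustive / brute-force search: try all 2**n_vars assignments
    and return the first satisfying one, or None if unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[v] != neg for v, neg in clause) for clause in clauses):
            return bits
    return None
```

The exponential enumeration is exactly why metaheuristics such as EA and ICA are attractive for larger instances.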

12.10 How does the MSE accumulation penalize BIC? Kindly elaborate further

Thank you very much for your comment.  This section has been elaborated accordingly.

12.11 What is optimally satisfied clauses? How does it relate to the Zm?

 

Thank you very much for your comment. An optimally satisfied clause corresponds to the feasible solution achieved. Zm is the ratio of the number of runs attaining the total minimum energy to the total number of runs. If Zm is close to 1, almost all solutions obtained in the HNN achieved the global optimal (feasible) solution.
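The Zm metric as described above can be computed as follows. The tolerance parameter `tol` is our assumption (some HNN studies accept a final energy within a small band of the global minimum); the function name is illustrative.

```python
def global_minima_ratio(final_energies, e_min, tol=0.001):
    """Zm (sketch): fraction of runs whose final HNN energy reaches the
    global minimum energy e_min within tolerance tol."""
    hits = sum(abs(e - e_min) <= tol for e in final_energies)
    return hits / len(final_energies)
```

A value of Zm near 1 means almost every run relaxed to a global (feasible) minimum rather than a local one.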

SECTION 13

RESPONSES

13.1 What do you mean by “more improved and robust heuristic” ? It should be something that can be measured quantitatively.

Thank you very much for your comment. Measuring the solution quality of a neural network quantitatively is generally a subject for further investigation. However, [28] and [48] measure the solution quality of logic programming in the Hopfield neural network.

13.2 This section contains too any grammatical error and bad sentence structures.

Thank you very much for your comment. This section has been revised accordingly.

Yours Sincerely,

Hamza Abubakar

 

Reviewer 4 Report

This paper utilizes a hybridized EA assimilated with the Hopfield neural network (HNN) in carrying out a random logic program (HNN-R2SATEA). The efficiency of the proposed method was compared with the existing traditional exhaustive search (HNN-R2SATES) model and the recently introduced HNN-R2SATICA model. The results proved the robustness, effectiveness, and compatibility of the HNN-R2SATEA model. However, there are many mistakes in the sentences, which need to be corrected throughout the whole paper. The paper also fails to highlight the issue addressed by the current study to show its importance.

Author Response

Hamza Abubakar

School of Mathematical Sciences

07/03/2020

The Editor

Neural Computation and Applications for Sustainable Energy Systems

Dear editor

REVISION AND RESUBMISSION OF MANUSCRIPT

We would like to thank the reviewer for a careful and thorough reading of this manuscript, titled “MODIFIED ELECTION ALGORITHM IN ACCELERATING THE PERFORMANCE OF HOPFIELD NEURAL NETWORK FOR RANDOM kSATISFIABILITY”, and for the thoughtful comments and constructive suggestions, which helped to improve its quality. We regret that there were problems with the English. The paper has been carefully revised by a native English speaker to improve the grammar and readability. We found that the general comments have helped us to improve the clarity and quality of the manuscript.

Round 2

Reviewer 3 Report

I have listed 14 questions (8.1 - 8.14) but none of them have been addressed by the authors.

The revised version of the manuscript is difficult to read. The authors should have highlighted the revised text using a different color, NOT by using the comment feature in MS Word.

Author Response

Since we have made changes to almost 90% of the manuscript, we hope you can refer to our responses to all of your comments in the attachment. We think it is better to present it that way rather than highlighting the changes in a different colour.

Author Response File: Author Response.pdf

Reviewer 4 Report

Minor corrections: refer to the figure mentioned in the sentence and briefly explain the findings. Otherwise, go through the paper for proofreading.

Author Response

Please refer to the attachment. 

Author Response File: Author Response.pdf

Round 3

Reviewer 3 Report

The authors have addressed all my previous queries. I appreciate the authors in providing the detailed point-to-point replies to my questions.

The manuscript, which has been re-written in most sections, is now of better quality. Its quality has improved tremendously compared to the previous submission. The writing has also improved - it is short and concise, and addresses the research topic/question.

I would recommend the manuscript to be accepted for publication.
