Article
Peer-Review Record

Comparative Evaluation of Feed-Forward Neural Networks for Predicting Uniaxial Compressive Strength of Seybaplaya Carbonate Rock Cores

Appl. Sci. 2025, 15(10), 5609; https://doi.org/10.3390/app15105609
by Jose W. Naal-Pech, Leonardo Palemón-Arcos and Youness El Hamzaoui *
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 18 January 2025 / Revised: 11 May 2025 / Accepted: 13 May 2025 / Published: 17 May 2025
(This article belongs to the Special Issue Research and Applications of Artificial Neural Network)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

1. The paper is too lengthy; in some places, overly basic explanations or visualizations are given that may not be necessary.

2. More recent references in this domain should be considered and added.

3. Although the flow of the paper is good, more stress should be placed on the novelty.

4. Figures such as Figure 15 can be removed; it is neither readable nor relevant.

5. The Conclusion is too lengthy.

• What is the main question addressed by the research? This study proposes a Bayesian-regularized neural network model to estimate uniaxial compressive strength from three key parameters—moisture content, interconnected porosity, and real density. It also implements a dedicated graphical user interface (GUI) customized for the Bayesian Regularization Backpropagation algorithm. The work highlights the significance of site-specific data in refining UCS estimations. The uniaxial compressive strength (UCS) of rock materials is among the most critical parameters in geotechnical and civil engineering structures, and its accurate estimation helps ensure structural stability and cost effectiveness.


• Do you consider the topic original or relevant to the field? Does it address a specific gap in the field? Please also explain why this is/is not the case. Yes, the topic is original and addresses the research gap of estimating UCS from important environmental parameters. Unlike traditional regression approaches, which assume a fixed functional form, the proposed model uses ANNs regularized with a Bayesian technique; ANNs can adapt their internal structure to find nonlinear relationships among water content, porosity, density, and UCS.
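As general background (an editorial note, not text quoted from the manuscript or the review), Bayesian regularization in its standard MacKay-style formulation trains the network by minimizing a penalized objective rather than the plain sum of squared errors:

\[
F(\mathbf{w}) \;=\; \beta \sum_{i=1}^{N} \bigl(t_i - y_i(\mathbf{w})\bigr)^{2} \;+\; \alpha \sum_{j=1}^{M} w_j^{2},
\]

where t_i are the measured UCS values, y_i(w) the network predictions, and the hyperparameters α and β are re-estimated during training; penalizing large weights in this way limits overfitting on small rock-core datasets.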


• What does it add to the subject area compared with other published material? The following works addressed this problem in 2024:

  • Khatti, J.; Grover, K.S. Estimation of Intact Rock Uniaxial Compressive Strength Using Advanced Machine Learning. Transp. Infrastruct. Geotech. 2024, 11, 1989–2022. https://doi.org/10.1007/s40515-023-00357-4
  • Sabri, M.S.; Jaiswal, A.; Verma, A.K.; et al. Advanced machine learning approaches for uniaxial compressive strength prediction of Indian rocks using petrographic properties. Multiscale Multidiscip. Model. Exp. Des. 2024, 7, 5265–5286. https://doi.org/10.1007/s41939-024-00513-4
  • Kochukrishnan, S.; Krishnamurthy, P.; Kaliappan, N. Comprehensive study on the Python-based regression machine learning models for prediction of uniaxial compressive strength using multiple parameters in Charnockite rocks. Sci. Rep. 2024, 14(1), 7360.

This work explores a novel Bayesian-regularized ANN model to solve the problem of estimating UCS, and the use of a deep learning approach should improve the results. The authors should compare their results with papers that have implemented this task using ML algorithms, such as those above, to validate their proposed framework.


• What specific improvements should the authors consider regarding the methodology? What further controls should be considered? The following improvements are suggested:

  • The paper focuses heavily on visualization of data and performance metrics; some basic parts can be made more concise, as they add to the length of the manuscript.
  • Results must be tabulated.
  • Focus must be placed on the proposed framework, and the novelty must be highlighted.
  • The use of the Bayesian framework must be emphasized, and the framework should be shown with the help of a flowchart.
  • The resolution of the figures must be improved.
  • The flow of the paper must be improved so that the contribution comes out clearly.
  • A comparison of results with existing work must be done in order to validate the proposed approach; refer to the ML techniques used in the references mentioned.


• Are the conclusions consistent with the evidence and arguments presented, and do they address the main question posed? Please also explain why this is/is not the case. The conclusions are consistent with the presented arguments; however, they should be made more concise and should highlight the novelty achieved rather than listing general advantages.


• Are the references appropriate? Yes, they are appropriate, but the more recent references mentioned above should be included in the manuscript.


• Any additional comments on the tables and figures. The GUI is shown in Figure 15, which is blurred and must be changed. Figure 6 is blurred. I would suggest presenting the results in the form of tables, as this makes them more interpretable; the manuscript lacks tables.

Author Response

Ms. Ref. No. : applsci-3460218

 

Author’s Reply to Review Report (Reviewer 1)

Firstly, we thank the reviewer for their insightful feedback and valuable suggestions, which have guided a comprehensive revision of our manuscript. In response, we have restructured the paper, strengthened the methodological framework, and refined the content to address all concerns raised. All modifications are highlighted in yellow in the revised document.

Summary of Major Revisions

  1. Title change – The previous title, “Deep Learning Neural Network for Statistical Modeling of Compressive Strength in Seybaplaya Bank Rocks: A Multivariate Analysis Incorporating Water Content, Porosity, and Density Parameters,” has been replaced with “Comparative Evaluation of Feed-Forward Neural Networks for Predicting Uniaxial Compressive Strength of Seybaplaya Carbonate Rock Cores”.
  2. Complete experimental redo – All UCS, water-content, porosity and density tests were repeated under the correct ASTM standards; the database was rebuilt (now 50 well-documented core specimens instead of the earlier 134 mixed samples).
  3. Scope & title rewrite – The study now compares four feed-forward ANN algorithms (RBF, BR, SCG, LM) rather than a single deep-learning GUI; the new title reflects this shift.
  4. Methodological overhaul – Uniform preprocessing, 30-run cross-validation, Friedman + Benjamini-Hochberg significance testing, and a partial-derivatives sensitivity analysis were introduced.
  5. Manuscript restructuring & size reduction – Redundant theory, figures and text were removed or condensed; total length cut by ~15 %, with basic ASTM details collapsed into one paragraph.
  6. Figures & tables replaced – All low-resolution images were redrawn; the paper now contains 19 high-resolution figures, 8 concise tables, and a new Bayesian-framework flowchart.
  7. Literature brought current – An extensive review of up-to-date literature was undertaken, encompassing numerous recent articles and, in particular, three 2024 benchmark studies (Khatti & Grover; Sabri et al.; Kochukrishnan et al.), which have been incorporated and juxtaposed in the revised version.
  8. Novelty highlighted – Intro, Methods and Conclusion now stress that this is the first Bayesian-regularized ANN applied to Seybaplaya carbonate–clay rocks and that it outperforms alternatives while controlling overfitting.
  9. Supplementary materials prepared – Raw dataset, MATLAB training code, statistical scripts and sensitivity-analysis outputs are supplied as supplementary files for full transparency.

 

Key Scientific Contributions of the Revised Study

  • Comprehensive multi-algorithm comparison – first side-by-side evaluation of RBF, BR, SCG and LM neural networks on Seybaplaya carbonate data.

 

  • Statistically validated performance ranking – 30 independent runs per model plus Friedman test with Benjamini-Hochberg correction provide objective accuracy rankings.

 

  • Sensitivity-driven feature-importance analysis – Dimopoulos partial-derivative method quantifies the influence of water content, interconnected porosity and real density on UCS; porosity dominates (54.4 %).

 

  • Practical guidelines for model selection – RBF delivers highest accuracy (median R² = 0.975; RMSE = 1.313 MPa); BR shows superior noise-robustness; SCG and LM converge faster but predict slightly less accurately.

 

  • Public data resource – curated database of 50 carbonate-core specimens with full UCS, porosity, density and water-content records to enable reproducibility and future benchmarking.

 

These contributions collectively identify the most effective ANN strategy for UCS prediction in heterogeneous carbonate formations and establish a transparent, statistically grounded framework for future machine-learning applications in rock mechanics.
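For readers who wish to reproduce the statistical ranking described above, the following minimal Python sketch illustrates the general workflow: a Friedman test across paired per-run scores, followed by pairwise post-hoc comparisons adjusted with the Benjamini-Hochberg procedure. The synthetic RMSE values, the choice of Wilcoxon signed-rank tests for the pairwise step, and the random seed are illustrative assumptions only; the analysis in the manuscript itself was carried out with the MATLAB scripts supplied as supplementary material.

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Hypothetical per-run test RMSE values (MPa) for 30 runs of each algorithm.
rmse = {name: rng.normal(loc, 0.3, size=30)
        for name, loc in [("RBF", 1.3), ("BR", 1.6), ("SCG", 2.0), ("LM", 1.9)]}

# Friedman test: do the four paired samples of run-level scores differ overall?
stat, p_overall = friedmanchisquare(*rmse.values())
print(f"Friedman chi-square = {stat:.2f}, p = {p_overall:.4f}")

# Pairwise post-hoc comparisons (Wilcoxon signed-rank, assumed here),
# adjusted with the Benjamini-Hochberg false-discovery-rate procedure.
names = list(rmse)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
raw_p = [wilcoxon(rmse[a], rmse[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
for (a, b), p, r in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: adjusted p = {p:.4f}, significantly different = {r}")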

 

 

Reviewer’s Comment:

“The paper is too lengthy, with some overly basic explanations or visualizations that may not be necessary”.

Author’s Response:

We thank the reviewer for these insightful observations.

 

We have substantially streamlined and tightened the manuscript by:

  • Condensing the introductory material.
  • Removing redundant theory and figures.
  • Shrinking the “Materials and Methods” section.

Non-essential visual elements have been removed to better highlight our core contributions.

 

Reviewer’s Comment:

“More recent references in this domain should be considered and added”.

Author’s Response:

Thank you for your kind comments; we agree with this point.

We have undertaken an extensive review of up-to-date literature, encompassing numerous recent articles and, in particular, three 2024 benchmark studies:

  • Khatti & Grover (2024) on advanced ML-based UCS estimation.
  • Sabri et al. (2024) on petrographic-driven strength prediction using hybrid ANNs.
  • Kochukrishnan et al. (2024) on Python-based regression for rock UCS.

 

These have been incorporated into Section 2, “Related Works,” specifically subsection 2.5, “Neural Networks for Predictive Modeling in Geomechanics,” and are cited as references [33], [34], and [35].

 

 

 

Reviewer’s Comment:

“Although the flow of the paper is good, more stress on the novelty should be given”.

Author’s Response:

Thank you for your valuable remarks; we fully agree with your assessment.

We have updated the “Contributions of This Study” section (pp. 7-8) to clearly emphasize our novel elements: multi-algorithm comparison, statistical validation, sensitivity analysis, model-selection guidelines, and Seybaplaya Carbonate Rock dataset.

Reviewer’s Comment:

“Figures such as 15 can be removed, it is not readable and relevant”.

Author’s Response:

Thank you for your valuable remarks; we fully agree with your comment.

Figure 15 from the previous version has been removed, and all subsequent figure numbers have been updated accordingly.

Reviewer’s Comment:

“The Conclusion is too lengthy”.

Author’s Response:

We appreciate your valuable remarks and fully concur with your observations.

The conclusions (now Section 7) have been condensed to focus succinctly on (a) the superior performance of the RBF network, (b) the robustness advantage of Bayesian regularization, and (c) the sensitivity ranking of input features. Broader commentary and suggestions for future work have been relocated to a brief “Outlook” subsection at the end.

We feel these modifications make the manuscript more concise, cohesive, and reader-friendly.

From our perspective, we are confident these revisions address Reviewer 1’s concerns, strengthening both the technical rigor and clarity of our manuscript.

We sincerely thank the reviewer for the thorough evaluation and helpful feedback. We believe these modifications significantly improve the manuscript’s quality and hope the revised version meets with your approval.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

From line 499, the presented manuscript is well-written but not concise. It presents the individual stages of the research too extensively. The authors also present the research method, theoretical foundations, and results too extensively. In the end, the authors conducted a discussion summarizing the results of their work. The authors comprehensively summarize the entire work in the Conclusions chapter. However, the presented manuscript is too extensive as a whole. Now, it looks like a detailed research report, not an article summarizing the main aspects of the authors' work. I believe this manuscript is unsuitable for publication in this form; it requires profound changes.

The Abstract is too long. It must be corrected.

This is an article-type manuscript, not a review one. The Introduction part from line 37 to line 492 is by far too long. It must be corrected. Some material from this section should be moved to the Materials and Methods Chapter. The maximum length of the Introduction part is three pages, preferably two pages.

The literature review is treated very briefly, and the literature presented is a bit outdated (8 of 34 items are new). It must be corrected.

Line 144 is [17], and next in line 499 are [26-28], but [18-25] are missing from the text.

Figure 6 is unreadable.

Line 616: I think it is Table 5, not 6, because in line 662 is another Table 6.

Line 666 – Please explain it in more detail: “indicates 22nálisis22 predictive”.

Figure 10 - I think this figure will be of better quality in the final version of this manuscript.

Figure 15 - I think this figure will be of better quality in the final version of this manuscript. Now, it is unreadable.

References:

- Please provide DOI numbers.

 

- All titles should be translated into English and marked, e.g., [in Spanish].

Author Response

Ms. Ref. No. : applsci-3460218

 

Author’s Reply to Review Report (Reviewer 2)

Firstly, we thank the reviewer for their insightful feedback and valuable suggestions, which have guided a comprehensive revision of our manuscript. In response, we have restructured the paper, strengthened the methodological framework, and refined the content to address all concerns raised. All modifications are highlighted in yellow in the revised document.

Summary of Major Revisions

  1. Title change – The previous title, “Deep Learning Neural Network for Statistical Modeling of Compressive Strength in Seybaplaya Bank Rocks: A Multivariate Analysis Incorporating Water Content, Porosity, and Density Parameters,” has been replaced with “Comparative Evaluation of Feed-Forward Neural Networks for Predicting Uniaxial Compressive Strength of Seybaplaya Carbonate Rock Cores”.
  2. Complete experimental redo – All UCS, water-content, porosity and density tests were repeated under the correct ASTM standards; the database was rebuilt (now 50 well-documented core specimens instead of the earlier 134 mixed samples).
  3. Scope & title rewrite – The study now compares four feed-forward ANN algorithms (RBF, BR, SCG, LM) rather than a single deep-learning GUI; the new title reflects this shift.
  4. Methodological overhaul – Uniform preprocessing, 30-run cross-validation, Friedman + Benjamini-Hochberg significance testing, and a partial-derivatives sensitivity analysis were introduced.
  5. Manuscript restructuring & size reduction – Redundant theory, figures and text were removed or condensed; total length cut by ~15 %, with basic ASTM details collapsed into one paragraph.
  6. Figures & tables replaced – All low-resolution images were redrawn; the paper now contains 19 high-resolution figures, 8 concise tables, and a new Bayesian-framework flowchart.

 

  7. Literature brought current – An extensive review of up-to-date literature was undertaken, encompassing numerous recent articles and, in particular, three 2024 benchmark studies (Khatti & Grover; Sabri et al.; Kochukrishnan et al.), which have been incorporated and juxtaposed in the revised version.
  8. Novelty highlighted – Intro, Methods and Conclusion now stress that this is the first Bayesian-regularized ANN applied to Seybaplaya carbonate–clay rocks and that it outperforms alternatives while controlling overfitting.
  9. Supplementary materials prepared – Raw dataset, MATLAB training code, statistical scripts and sensitivity-analysis outputs are supplied as supplementary files for full transparency.

 

Key Scientific Contributions of the Revised Study

  • Comprehensive multi-algorithm comparison – first side-by-side evaluation of RBF, BR, SCG and LM neural networks on Seybaplaya carbonate data.

 

  • Statistically validated performance ranking – 30 independent runs per model plus Friedman test with Benjamini-Hochberg correction provide objective accuracy rankings.

 

  • Sensitivity-driven feature-importance analysis – Dimopoulos partial-derivative method quantifies the influence of water content, interconnected porosity and real density on UCS; porosity dominates (54.4 %).

 

  • Practical guidelines for model selection – RBF delivers highest accuracy (median R² = 0.975; RMSE = 1.313 MPa); BR shows superior noise-robustness; SCG and LM converge faster but predict slightly less accurately.

 

  • Public data resource – curated database of 50 carbonate-core specimens with full UCS, porosity, density and water-content records to enable reproducibility and future benchmarking.

 

These contributions collectively identify the most effective ANN strategy for UCS prediction in heterogeneous carbonate formations and establish a transparent, statistically grounded framework for future machine-learning applications in rock mechanics.

 

Reviewer’s Comment:

“From line 499, the presented manuscript is well-written but not concise. It presents the individual stages of the research too extensively... Now, it looks like a detailed research report, not an article summarizing the main aspects of the authors' work. I believe this manuscript is unsuitable for publication in this form; it requires profound changes.”

 

Author’s Response:

We thank the reviewer for these insightful observations.

 

We have trimmed and refocused each section to emphasize only core methods, results, and discussion:

  • The Introduction was reduced by moving the extended theoretical background into a new “Materials and Methods” section (Section 3).
  • Materials and Methods now contains the detailed experimental protocols previously in the Introduction.
  • Results and Discussion have been merged into a single “Results and Discussion” section (Section 6) to remove redundant summaries.
  • The Conclusions have been condensed into four bullet points (Section 7).

 

The manuscript has been thoroughly condensed and restructured from the initial extensive research report format into a concise scientific article. We significantly reduced theoretical background, ensuring each section succinctly emphasizes key aspects of the study.

 

Reviewer’s Comment:

“The Abstract is too long. It must be corrected”.

Author’s Response:

Thank you for your kind comments; we agree with this point.

The Abstract has been extensively shortened and now succinctly captures the objectives, methods, key results, contributions and main conclusions.

 

 

Reviewer’s Comment:

“This is an article-type manuscript, not a review one. The Introduction part from line 37 to line 492 is by far too long. It must be corrected. Some material from this section should be moved to the Materials and Methods chapter. The maximum length of the Introduction part is three pages, preferably two pages”.

Author’s Response:

Thank you for your valuable remarks; we fully agree with your assessment.

We have relocated the detailed descriptions of ASTM testing standards, sampling protocols, and data-split procedures into Section 3 (Materials and Methods). The Introduction now succinctly covers the background, gap analysis, and study objectives to comply with the recommended length.

Reviewer’s Comment:

“The literature review is treated very briefly, and the literature presented is a bit outdated (8 of 34 items are new). It must be corrected”.

Author’s Response:

We appreciate your valuable remarks and fully agree with your comment. In response, we have expanded the literature review on machine-learning–based UCS prediction in Section 2 (Related Works), integrating recent 2024 studies to enhance both currency and depth and thereby strengthen the manuscript’s relevance and scientific rigor.

Reviewer’s Comment:

“Line 144 is [17], and next in line 499 are [26–28], but [18–25] are missing from the text”.

Author’s Response:

We appreciate your valuable remarks and fully concur with your observations.

All citation callouts have been corrected and are now properly formatted, and reference numbering has been validated throughout the manuscript.

 

 

 

Reviewer’s Comment:

“Figure 6 is unreadable”.

Author’s Response:

We appreciate your valuable remarks and fully concur with your observations. In response, all figures have been replaced with high-resolution images to enhance readability, and Figure 6—identified as unclear and unnecessary—has been removed from the revised version.

Reviewer’s Comment:

“Line 616: I think it is Table 5, not 6, because in line 662 is another Table 6”.

Author’s Response:

We gratefully acknowledge your insightful feedback and fully agree with your observations. To address this, we have meticulously revised all tables to enhance both consistency and clarity.

Reviewer’s Comment:

“Line 666 – Please explain it in more detail: ‘indicates 22nálisis22 predictive’”.

Author’s Response:

We thank the reviewer for highlighting this issue. The sentence has been revised for clarity and now reads:

“The bracketed term denotes the normalized predictive contribution of each input variable, as determined by the partial-derivatives sensitivity method.”
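To make the quoted definition concrete, the short Python sketch below shows one way such normalized contributions can be computed in the spirit of the partial-derivatives (Dimopoulos-type) sensitivity method; the stand-in model, the input ranges, and the use of central finite differences are illustrative assumptions and not the manuscript's MATLAB implementation.

import numpy as np

def predict(x):
    # Hypothetical stand-in for a trained network mapping
    # [water content, interconnected porosity, real density] to UCS (MPa).
    w, phi, rho = x
    return 120.0 - 3.0 * w - 12.0 * phi + 5.0 * rho

def sensitivity(model, X, eps=1e-4):
    # Accumulate squared output derivatives with respect to each input
    # (central finite differences) and normalize them to percentages.
    ssd = np.zeros(X.shape[1])
    for x in X:
        for j in range(X.shape[1]):
            xp, xm = x.copy(), x.copy()
            xp[j] += eps
            xm[j] -= eps
            ssd[j] += ((model(xp) - model(xm)) / (2 * eps)) ** 2
    return 100.0 * ssd / ssd.sum()

# Fifty synthetic specimens with plausible ranges for the three predictors.
X = np.random.default_rng(1).uniform([0.5, 2.0, 2.3], [3.0, 12.0, 2.8], (50, 3))
print(sensitivity(predict, X))  # porosity dominates for this toy model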

 

Reviewer’s Comment:

“Figure 10 – I think this figure will be of better quality in the final version of this manuscript”

Author’s Response:

We thank the reviewer for highlighting this issue. The figure from the original manuscript has been replaced, and all figures have been updated to ensure legible axes, labels, and legends.

Reviewer’s Comment:

“Figure 15 – I think this figure will be of better quality in the final version of this manuscript. Now, it is unreadable”.

Author’s Response:

We thank the reviewer for drawing our attention to this issue. In parallel with Figure 10, we have replaced Figure 15 and revised all figures to ensure fully legible axes, labels, and legends.

 

Reviewer’s Comment:

“References: - Please provide DOI numbers.”

Author’s Response:

We appreciate the reviewer’s insightful comment. Accordingly, we have updated the References section to include DOI numbers for each citation where available.

Reviewer’s Comment:

“References: - All titles should be translated into English and marked, e.g., [in Spanish].”

Author’s Response:

We thank the reviewer for highlighting this point. Accordingly, we have revised the references to provide English translations for all non-English titles.

These revisions have enhanced the manuscript’s conciseness, cohesion, and readability. We are confident that they fully address Reviewer 2’s concerns and have strengthened both the technical rigor and clarity of our work. We sincerely thank the reviewer for the thorough evaluation and constructive feedback, which have substantially improved the quality of the manuscript. We trust that the revised version now meets with your approval.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors


The paper “Deep Learning Neural Network for Statistical Modeling of Compressive Strength in Seybaplaya Bank Rocks: ...” by J. Naal-Pech et al. presents an ANN model to assess UCS depending on three specific key parameters. The use of a selected sampling set shows that the multivariate analysis can provide useful information despite the limitations.

The paper is well written and structured, allowing for a clear understanding of methods and procedures. The results are soundly supported by the discussions.

The interest of the results is quite limited, the only novelty being the combination of well-known parameters into a multivariate analysis, applying well-known and general ANN methods using MATLAB libraries. Applying the analysis to a local set of data allows for obtaining some reasonable, but limited, correlated information.


The first criticism about the paper is that it makes a lengthy and over-detailed description of basic statistical analysis, relating the same information again and again.
The entire Section 4, despite the appealing Figures 3 and 4, consists of repeated, useless information.
Section 5.1 describes the data that were used without an adequate prior description in Sections 3 and 4.
Indeed, the definition of the sampling set is given only in Section 9.1!

Section 9.8 is about details of the software that are useless in the discussion; its only interest is as documentation to repeat the analysis. Either remove it from the paper or move it to an Annex.
The same applies to Sections 9.10, 9.11, and 9.12.



A second criticism is about consistency. As mentioned in Section 9.5 (and Fig. 11), the fit values of the test and training sets are inconsistent.
The Authors should soundly discuss the issue. The limited available data is not a reason not to do so. Consider starting over with a different selection of the test set. If for any random selection of test sets the situation is the same (better test-set fitting than training-set fitting), then either the algorithm is failing or the data set is very rare. There are also techniques to expand the data sets for training. In any case, it is on the side of the Authors to probe the methodology.

Note that the information in Figure 9 also demands a deeper discussion and a necessary refinement of the algorithm. The gap in the UCS data in the range 80–90 MPa seems to cause a problem in connecting the dependencies within the algorithm.



A third criticism is the reference to a 'pioneering GUI-driven methodology provides an adaptable template for geomechanical analyses in regions typified by...'. The GUI is mentioned in Section 9.13 and seems to be a basic interface connecting the data and the MATLAB libraries. It is a piece of software that is not documented, distributed, or tested with other data sets. Therefore, it is a tool developed by the Authors for their own personal use. That is fine and is common practice, but it is not of any interest to mention in a scientific paper, and it is nothing like a 'pioneering GUI-driven methodology'. Note that the paper has not demonstrated the development of any methodology for training ANNs based on a new-to-market tool.

*** The Reviewer acknowledges that the Authors provide access to the data and source code for open discussion. In any case, if it is considered so important, make a dedicated Annex about this tool to show its capabilities. ***


The Reviewer is convinced that the Authors can make a thorough revision and rewriting of the paper content. Still, the consistency issue has to be soundly solved and discussed. If so, I will be glad to read a new version showing proper work devoted to analysing the complex UCS data with smart methodologies.






OTHER COMMENTS to review - - - - -

The last paragraph in Section 1 (L96 ff.) addresses details that concern the summary, not the presentation. Consider moving or removing it.



The use of bold formats to highlight words within the text is completely odd in scientific papers. Consider removing all of them.





L535
The first hidden layer employed a hyperbolic tangent sigmoid (tansig) activation function [32], [33], while the second and third hidden layers also utilized tansig.

Just write it easy:
The 3 hidden layers employed a hyperbolic tangent sigmoid (tansig) activation function [32], [33].



Figure 6:
Bad quality and repeating a basic module with a naive description. Consider simply removing the figure.



Table 4
This information should be exactly the same as in Table 1.
One: make that point.
Two: what is the interest of showing the statistical distribution of the variables again? Repeated information. Consider removing it.

 

Section 9.3
... neural network was trained and evaluated in five independent runs, each time initializing the network with different random weights.

Explain this point better. Is the same data set (training) used again and again? How is it possible to improve if all weights are randomly initialized at each run? Does it mean that one set of initial weights is better than others? If so, that set has to be used. However, what happens if the training set changes? Why not explore many more cases to find a 'best' initial weight set?






L636
...RMSE starts near approximately 70 and steadily decreases to the mid-40s by the final iteration,...

Please check: ...decreases to the mid-30s by the final iteration,...



L643
...The loss curves in the lower plot

Define loss curves.



L651
As summarized in Table 3, ...

Table 6 ? Check.



L1217
A three-hidden-layer network (10-6-1 neurons) with hyperbolic tangent activations proved effective...
Explain in the right section why a three-hidden-layer network (10-6-1 neurons) is the choice for this model.

Besides all the provided details, many of them of little interest, the paper does not mention the time involved in the evaluation of the sets. That information is also of interest as another reference on the cost and performance of the algorithms applied, and on the cost of exploring alternative data sortings.

 

 

Comments for author File: Comments.pdf

Author Response

Ms. Ref. No. : applsci-3460218

 

Author’s Reply to Review Report (Reviewer 3)

Firstly, we thank the reviewer for their insightful feedback and valuable suggestions, which have guided a comprehensive revision of our manuscript. In response, we have restructured the paper, strengthened the methodological framework, and refined the content to address all concerns raised. All modifications are highlighted in yellow in the revised document.

Summary of Major Revisions

  1. Title change – The previous title, “Deep Learning Neural Network for Statistical Modeling of Compressive Strength in Seybaplaya Bank Rocks: A Multivariate Analysis Incorporating Water Content, Porosity, and Density Parameters,” has been replaced with “Comparative Evaluation of Feed-Forward Neural Networks for Predicting Uniaxial Compressive Strength of Seybaplaya Carbonate Rock Cores”.
  2. Complete experimental redo – All UCS, water-content, porosity and density tests were repeated under the correct ASTM standards; the database was rebuilt (now 50 well-documented core specimens instead of the earlier 134 mixed samples).
  3. Scope & title rewrite – The study now compares four feed-forward ANN algorithms (RBF, BR, SCG, LM) rather than a single deep-learning GUI; the new title reflects this shift.
  4. Methodological overhaul – Uniform preprocessing, 30-run cross-validation, Friedman + Benjamini-Hochberg significance testing, and a partial-derivatives sensitivity analysis were introduced.
  5. Manuscript restructuring & size reduction – Redundant theory, figures and text were removed or condensed; total length cut by ~15 %, with basic ASTM details collapsed into one paragraph.
  6. Figures & tables replaced – All low-resolution images were redrawn; the paper now contains 19 high-resolution figures, 8 concise tables, and a new Bayesian-framework flowchart.

 

  7. Literature brought current – An extensive review of up-to-date literature was undertaken, encompassing numerous recent articles and, in particular, three 2024 benchmark studies (Khatti & Grover; Sabri et al.; Kochukrishnan et al.), which have been incorporated and juxtaposed in the revised version.
  8. Novelty highlighted – Intro, Methods and Conclusion now stress that this is the first Bayesian-regularized ANN applied to Seybaplaya carbonate–clay rocks and that it outperforms alternatives while controlling overfitting.
  9. Supplementary materials prepared – Raw dataset, MATLAB training code, statistical scripts and sensitivity-analysis outputs are supplied as supplementary files for full transparency.

 

Key Scientific Contributions of the Revised Study

  • Comprehensive multi-algorithm comparison – first side-by-side evaluation of RBF, BR, SCG and LM neural networks on Seybaplaya carbonate data.

 

  • Statistically validated performance ranking – 30 independent runs per model plus Friedman test with Benjamini-Hochberg correction provide objective accuracy rankings.

 

  • Sensitivity-driven feature-importance analysis – Dimopoulos partial-derivative method quantifies the influence of water content, interconnected porosity and real density on UCS; porosity dominates (54.4 %).

 

  • Practical guidelines for model selection – RBF delivers highest accuracy (median R² = 0.975; RMSE = 1.313 MPa); BR shows superior noise-robustness; SCG and LM converge faster but predict slightly less accurately.

 

  • Public data resource – curated database of 50 carbonate-core specimens with full UCS, porosity, density and water-content records to enable reproducibility and future benchmarking.

 

These contributions collectively identify the most effective ANN strategy for UCS prediction in heterogeneous carbonate formations and establish a transparent, statistically grounded framework for future machine-learning applications in rock mechanics.

 

Reviewer’s Comment:

“The interest of the results is quite limited, being the only novelty the combination of well-known parameters into a multivariate analysis, applying well known and general ANN methods using MATLAB libraries. The fact of applying the analysis to a local set of data allows for obtaining some reasonable, but limited, correlated information”.

Author’s Response:

We thank the reviewer for these insightful observations.

 

We have expanded our discussion of novelty in both the introduction (final paragraph) and the new “Contributions of This Study” section (2.6) to emphasize:

  • A comprehensive multi-algorithm comparison across Radial Basis Function (RBF), Bayesian-Regularized (BR), Scaled Conjugate Gradient (SCG), and Levenberg-Marquardt (LM) networks.
  • A statistically validated ranking via Friedman testing with Benjamini-Hochberg correction.
  • A sensitivity-driven feature-importance analysis quantifying each input’s effect on UCS.
  • Guidelines for model selection tailored to karst-influenced carbonate formations.
  • A curated, open dataset of 50 Seybaplaya core specimens for future benchmarking.

 

These additions, together with the removal of repeated passages, make clear that our contribution extends well beyond a simple multivariate application and focus the narrative on our novel findings.

Reviewer’s Comment:

“The first criticism about the paper is to make a lengthy and over-detailed description of basic statistical analysis, relating again and again and again the same information. The entire section 4, despite the appealing Figures 3 and 4, is about repeated useless information. Section 5.1 describes the data that was used, without the adequate description in sections 3 and 4 before. Indeed, the definition of the sampling set is done only at section 9.1!”

 

Author’s Response:

Thank you for your kind comments; we agree with this point. We have condensed the manuscript and removed redundant material:

  • Condensed Section 4 by eliminating repetitive descriptions of descriptive statistics and basic tests, so that only novel analytical results remain.
  • Streamlined the methodological flow by relocating the detailed data‐description material (formerly in Sections 5.1 and 9.1) into Section 3 (“Materials and Methods”), where the sampling set and experimental procedures are now fully defined before any analysis.
  • Unified all statistical summaries into a single, concise subsection (“6.1 Descriptive Statistics”), removing duplicate explanations.
  • Focused Section 5 exclusively on performance‐evaluation metrics and moved all algorithmic descriptions to Section 4, so that each section serves a unique purpose without overlap.

These revisions sharpen the narrative around our core contributions and ensure that each section delivers new information without reiteration.

 

Reviewer’s Comment:

“Definition of the sampling set is done only at section 9.1”.

Author’s Response:

We thank the reviewer for this perceptive observation. In response, we have moved the sampling set definition to section 3.1 (“Study Site and Sampling”), immediately after the site description, and removed its misplaced reiteration in section 5.1.

 

Reviewer’s Comment:

“Section 9.8 is about details of the software that are useless in the discussion. The only interest is about documentation to repeat the analysis. Either remove from the paper, or move to an Annex. The same is for sections 9.10, 9.11, 9.12”.

Author’s Response:

We thank the reviewer for this perceptive observation. In response, we have removed all software-implementation detail from the main Discussion (formerly in Sections 9.8, 9.10, 9.11, and 9.12) and transferred it to the Supplementary Information. All in-text references and section numbering have been updated accordingly. We have also removed the detailed GUI descriptions from the Discussion. The Supplementary Information now houses the full procedural descriptions and code references required for reproducibility, preserving the narrative flow of the main text.

 

Reviewer’s Comment:

“Consistency issue: the test set fits better than the training set. Must discuss and/or reselect the test set”.

Author’s Response:

We appreciate your valuable remarks and fully agree with your comment. We have re-examined all data splits over 100 random seeds, confirmed that the pattern persists, and now discuss this phenomenon in Section 6.2. We also added a brief investigation of data-augmentation techniques, which showed no significant improvement, and we recommend exploring larger datasets in future work.

Reviewer’s Comment:

“Figure 9 demands a deeper discussion; the gap at 80–90 MPa is causing connectivity issues”.

Author’s Response:

We appreciate your valuable remarks and fully concur with your observations. We have expanded the analysis in Section 6.2 to explain:

  • How the sparsity of UCS values in the 80-90 MPa range affects network training.
  • Why the RBF network’s localized basis functions mitigate this gap better than global-activation methods.
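As a brief illustration of the locality argument in the second point (general background, not a formula quoted from the manuscript), a Gaussian RBF network predicts

\[
\hat{y}(\mathbf{x}) \;=\; b \;+\; \sum_{k=1}^{K} w_k \exp\!\left(-\frac{\lVert \mathbf{x}-\mathbf{c}_k\rVert^{2}}{2\sigma_k^{2}}\right),
\]

so each hidden unit responds appreciably only near its centre c_k; a sparsely populated UCS band therefore mainly affects the few nearby centres, whereas with a global activation such as tansig every training sample influences weights that act over the whole input range.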

Reviewer’s Comment:

“Pioneering GUI-driven methodology is overstated”.

Author’s Response:

We appreciate your valuable remarks and fully concur with your comment.

All GUI‐related content has been removed from the manuscript.

 

Reviewer’s Comment:

Minor editorial comments:

  • Moved the last paragraph of Section 1 into the Conclusions. It’s done. Thank you.
  • Removed all bold formatting of in-text keywords. It’s done. Thank you.
  • Simplified the activation-function description as suggested. It’s done. Thank you.
  • Deleted the low-value figure and updated numbering throughout. It’s done. Thank you.
  • Merged redundant tables or moved duplicates. It’s done. Thank you.
  • Clarified the training-run methodology to explain repeated random initializations and their role in robustness assessment. It’s done. Thank you.
  • Corrected numerical typos. It’s done. Thank you.
  • Defined “loss curves” in the figure. It’s done. Thank you.
  • Updated all table references to match the new numbering. It’s done. Thank you.
  • Explained the choice of the three-hidden-layer (10-6-1) topology. It’s done. Thank you.
  • Added algorithm run-time comparisons to address cost-performance trade-offs. It’s done. Thank you.

Note: All minor editorial comments have been addressed. While the revised manuscript differs substantially from the original, the study site location remains unchanged, with samples collected from the same Seybaplaya Bank.

We have comprehensively addressed Reviewer 3’s comments: all recommended deletions have been executed or relocated outside the main text, and every issue raised has been thoroughly discussed in the revised manuscript. The text has undergone professional English editing to enhance clarity and precision. We trust these revisions satisfy the reviewer’s expectations and substantially improve both the quality and readability of the paper. We again thank Reviewer 3 for their insightful feedback, which has significantly strengthened our work, and remain available to implement any further suggestions.

 

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The authors have worked hard to improve the Manuscript, and I thank them for considering my comments. Now, their work presents a higher level of advancement.

I believe that the work is suitable for publication.

The Abstract is too long.

Now, the Introduction Chapter is a bit too short, but Chapter “2. Related Works” is a bit too long.

Lines 410-415, 417-423 – Why are they bold?

Lines 474–488, 503–529, 540–555, 564–576 – Why are they written in a different and larger font?

Author Response

Ms. Ref. No. : applsci-3460218

 

Response to Reviewer 2

We thank Reviewer 2 for recognizing the overall quality of our revised manuscript and for the constructive comments. Below, we address each point in detail.

Reviewer Comment #1

The Abstract is too long

Response and changes made:

We have condensed the Abstract to emphasize the study’s objectives, methodology, key findings, and conclusions, reducing its length from 246 to 182 words in line with Appl. Sci. guidelines. All statistical metrics and procedural details have been relocated to the Methods and Results sections (Lines 9–24, highlighted in yellow).

 

Reviewer Comment #2

Introduction Chapter is a bit too short, but Chapter ‘2. Related Works’ is a bit too long.

Response and changes made:

We have expanded the Introduction from 320 to 580 words by (a) articulating the research gap in UCS prediction for karst-influenced carbonates, (b) delineating the study’s specific contributions, and (c) providing a detailed manuscript outline. Concurrently, we have condensed Section 2 (“Related Works”) from 1,680 to 1,200 words by eliminating redundancy and merging analogous studies, while preserving the most critical benchmarks (Lines 29–120, 122-153, highlighted in yellow).

 

Reviewer Comment #3

Lines 410–415, 417–423 – Why are they bold?

Response and changes made:

All unintended bold formatting has been removed, and those lines now use the same font and weight as the surrounding text. During conversion from the Word file to PDF, a style inconsistency inadvertently reintroduced bold formatting; this error has since been corrected, and we have verified consistent use of regular font weight throughout the manuscript (Lines 358-371, 385-412, 423-439, highlighted in yellow).

 

Reviewer Comment #4

Lines 474–488, 503–529, 540–555, 564–576 – Why are they written in a different and larger font?

Response and changes made:

We discovered a style inconsistency in our template that caused those sections to appear at 14 pt rather than in the journal’s required format. We have now standardized all body text to comply fully with the journal’s formatting guidelines. During conversion to PDF, a similar style glitch inadvertently reintroduced bold formatting; this error has since been corrected, and we have verified consistent use of regular font weight throughout the manuscript (Lines 358-371, 385-412, 423-439, highlighted in yellow).

 

We trust that these revisions address Reviewer 2’s concerns. Thank you again for your thoughtful feedback.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The Authors have made a thorough revision considering the comments provided by the Reviewers. The result is an improved draft that presents the analysis of a set of geological samples to evaluate the performance of different NNs in predicting the UCS value with respect to the key sample parameters.

The Authors did a thorough and critical revision to present a sound scientific paper, and the Reviewer congratulates the Authors for the effort and, more importantly, the result. The paper is largely improved and can now be discussed for considering its publication.

Still, some points must be addressed before publication can be considered. There is no major criticism of the content of the paper; the points are mostly about missing information and/or the presentation of the results. Therefore, the Reviewer is confident that the Authors can complete the revision successfully and present a version for publication.

See the attachment.

Comments for author File: Comments.pdf

Author Response

Ms. Ref. No. : applsci-3460218

 

Response to Reviewer 3

We thank Reviewer 3 for the positive evaluation and for the many detailed suggestions. Below we address each point in turn.

 

The reviewer’s comments and the corresponding responses/changes made are paired below; each comment is followed immediately by our response.

L47: “…ANFIS to PSO-tuned networks” – Define ANFIS and PSO.

We have expanded the acronym on first use (L47): “adaptive neuro-fuzzy inference system (ANFIS)” and “particle swarm optimization (PSO), a population-based stochastic search algorithm.”

L115: “…apply both MLR and ANN (LM-trained MLP)” – Define MLR, MLP, and LM.

We now write: “multiple linear regression (MLR), multilayer perceptron (MLP) neural network trained via the Levenberg–Marquardt (LM) algorithm.”

L124 & L129: Ensure “LM-ANN” is only used after LM has been defined.

We removed all instances of “LM-ANN” prior to line 115, and now use “LM-trained MLP” consistently after the acronym definitions.

L120: “…incorporating Slake durability indices (Id₂, Id₄)” – Remove “(Id₂, Id₄)”.

The parentheses have been removed so that only the terms “Slake durability indices” remain (L120).

L137: “…and VAF of 97%…” – Define VAF.

We now write: “variance accounted for (VAF), a percentage measure of explained variance.” (L137).

L151: “…ultrasonic pulse velocity (Vp),” – Remove “(Vp)”; abbreviation is re-used later.

The “(Vp)” has been omitted in line 151; “ultrasonic pulse velocity” remains without abbreviation here.

L192: Remove parameter abbreviations “(A), (M), (D), (Vp), (E)” when they are not otherwise used.

We now refer simply to “specimen area, mass, density, P-wave velocity, and Young’s modulus” without parenthetical symbols.

L198: Simplify ELM configuration details.

We replaced the detailed hyperparameter list with: “The extreme learning machine (ELM) model used 10 hidden neurons and a population size of 20, achieving the best overall performance.”

L219: “(CAM ≈ 0.99)” – Define CAM.

We now write: “coefficient of agreement (CAM ≈ 0.99), a metric of predictive homogeneity.”

L226: Define scikit-learn and OLS; simplify package-specific detail.

We now state: “models implemented using standard Python libraries (e.g., scikit-learn) with linear regression and step‐wise selection routines.” Detailed function names have been removed.

L274: Complete “Delegate of Mines in Puebla, Puebla (Mexico)”.

Added “(Mexico)” after each geographic mention (L274–L275).

L279: Complete “Seybaplaya (Campeche, Mexico)”.

Updated to “the town of Seybaplaya (Campeche, Mexico)” (L279).

L280: Comment or remove “(Qpt Cq–Cz)”.

Removed the abbreviation “(Qpt Cq–Cz)” and added a brief descriptor of the outcrop sequence.

Intruder text “análisi” in multiple lines (L280, L294, L323, L372, L383) – Remove and correct.

All intruder text (“análisi” and similar artifacts) has been removed; the affected sentences have been proofread and completed.

L283: Complete geographic range “between Haltunchén, Villa Madero (Mexico)”.

Added “(Mexico)” to the list of localities (L283).

L284: Provide source for “Campeche Geological-Mining Map e15-3, 1:250 000”.

Added citation “Servicio Geológico Mexicano (2021)” and included full map reference.

Figure 1: Improve resolution and add source.

Replaced with a higher-resolution map; caption now reads “Geological context of the Seybaplaya Formation (Servicio Geológico Mexicano, 2021)”.

L303: After “following equation”, specify distribution/use.

Added “assuming maximum variance for unknown populations; see Equation (1).”

Equation (1): Check Z subindex.

Corrected to Zα/2 for two-tailed confidence intervals.
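For clarity, and under the assumption that Equation (1) is the standard proportion-based sample-size formula implied by the “maximum variance for unknown populations” remark, the corrected expression reads

\[
n \;=\; \frac{Z_{\alpha/2}^{2}\, p\,(1-p)}{e^{2}},
\]

with p = 0.5 under the maximum-variance assumption and e the allowable estimation error.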

L325: “flatness tolerance of 0.00254 cm” – Use original ASTM units.

Replaced “0.00254 cm” with “0.001 in (25 µm)” and retained the ASTM reference.

L340: Simplify machine description.

Changed to “a universal testing machine” without serial number.

L342–ff: Remove step-by-step machine instructions.

The itemized operator instructions have been removed; only the standard test method remains.

Figure 4: Low quality, remove.

Figure 4 has been fixed.

L360 & Figure 6: Clarify moisture context.

Revised text to describe “lab-measured moisture content shortly after core collection” and updated Figure 6 caption accordingly.

L384: “Nonetheless” syntax; clarify true density moisture state.

Changed to “We measured true density on oven-dried specimens via ASTM D854-23.”

Figure 7b: Clarify volume measurement.

Added description “volume determined via fluid displacement in a pycnometer.”

L394 & Figure 8A: Explain absorbed fluid measurement.

Added procedure: “pore volume measured by vacuum saturation and weight difference” and updated caption.

Figure 8A: Add methodological detail.

Caption now reads “Vacuum saturation apparatus used to determine pore volume.”

L399: Report measurement deviations.

Added “measurement uncertainty ±3 % for moisture, ±1 % for density, and ±2 % for porosity based on repeatability tests.”

Table 2: Add row separators.

Inserted horizontal lines between rows for clarity.

Section 6.1: Swap order of Table 2 and Figure 9; comment on binning.

Now present Table 2 first, then Figure 9. Added note on histogram bin width chosen based on measurement precision.

L471, L496: Replace “However”/“Nevertheless” with “Figure ...”

Replaced transitional words with direct references: “Figure 9 shows…” and “Figure 10 presents…”.

Figure 9 caption: Improve.

Changed to “Distributions of measured properties for 50 Seybaplaya core samples.”

Figure 10 caption: Improve.

Changed to “Pairwise scatterplots of sample properties with marginal histograms.”

Remove repeated commentary on diagonal histograms (L502–ff) and on Figure 11 (L538–555).

Condensed descriptions to avoid redundancy; kept only unique insights.

Figure 12: Normalize data and update.

Re-plotted with all variables normalized to their maximum range; caption updated accordingly.

L620: Clarify data-split notation.

Preceding paragraph now defines “70 % training, 15 % validation, and 15 % test sets.”

L641: Replace “Nonetheless” with “Figure 17 shows…”

Updated transition.

Figure 17: Adjust scale/remove extreme outliers.

Y-axis limits adjusted to show main distribution; legend notes that 3 outliers beyond plot limits were excluded.

L676, L696: Explain “30 independent runs” and dataset splits.

Added: “Each run randomly re-splits the full dataset into the prescribed subsets to assess variability due to sampling; summary statistics are reported over these 30 realizations.”
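A compact Python sketch of this repeated-evaluation protocol is given below; the 70 %/15 %/15 % proportions follow the data-split description above, while the RMSE summary metric and the placeholder training function are illustrative assumptions rather than the authors' MATLAB code.

import numpy as np

def random_split(n, rng, fracs=(0.70, 0.15, 0.15)):
    # Shuffle specimen indices and cut them into training/validation/test sets.
    idx = rng.permutation(n)
    n_tr, n_va = int(fracs[0] * n), int(fracs[1] * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

def repeated_evaluation(train_fn, X, y, runs=30, seed=0):
    # Each run re-splits the full dataset, retrains, and records the test RMSE.
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(runs):
        tr, va, te = random_split(len(X), rng)
        model = train_fn(X[tr], y[tr], X[va], y[va])
        scores.append(np.sqrt(np.mean((y[te] - model(X[te])) ** 2)))
    return np.median(scores), np.percentile(scores, [25, 75])

# Example with synthetic data and a trivial "model" that predicts the training mean.
X = np.random.default_rng(1).normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 60.0
mean_model = lambda Xtr, ytr, Xva, yva: (lambda Xte: np.full(len(Xte), ytr.mean()))
print(repeated_evaluation(mean_model, X, y))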

 

We have comprehensively addressed Reviewer 3’s comments: all recommended deletions have been executed or relocated outside the main text, and every issue raised has been thoroughly discussed in the revised manuscript. The text has undergone professional English editing to enhance clarity and precision. We trust these revisions satisfy the reviewer’s expectations and substantially improve both the quality and readability of the paper. We again thank Reviewer 3 for their insightful feedback, which has significantly strengthened our work, and remain available to implement any further suggestions.

 

Author Response File: Author Response.pdf
