Reducing Computational Time in Pixel-Based Path Planning for GMA-DED by Using Multi-Armed Bandit Reinforcement Learning Algorithm
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The authors present an interesting and novel approach to path planning, aiming to reduce computational time. While the method appears scientifically sound, the article requires revision in terms of organisation and content. Specifically, important introductory information is missing, while excessive detail is given to less relevant aspects, making the novelty, proposal, and overall context unclear, especially for a reader from a different field of science. Below is a detailed report:
1) Try to improve the grammar and clarity of the sentences, checking them paragraph by paragraph
2) GMA-DED or WAAM is a well-known topic, but the number of citations and the depth of discussion on it seem limited. I suggest expanding the discussion on the challenges of this technology and the role of AI, not just in path planning but more broadly. Moreover, discuss the role of overlapping distance in path planning, cite the work of Ding [3], and include the MAT method for path planning of complex structures [4].
[1] WAAM challenges/defects: 10.1016/j.jmapro.2018.08.001
[2] Role of AI in improving WAAM: 10.1007/s10845-023-02085-5
[3] Overlapping model: 10.1016/j.rcim.2014.08.008
[4] Medial axis transformation: 10.1016/j.jclepro.2016.06.036
3) The details about the Basic-Pixel method should be moved to Section 3, as it serves as the foundation for the proposed advancements. Therefore, please revise the last part of the introduction by shifting the detailed explanation of the initial algorithm to Section 3, where it can be presented alongside the RL-based computational optimization. In the introduction, it would be more effective to discuss the computational limitations of slicing algorithms and planning in general, then introduce RL as a method to address these challenges.
4) Additionally, in WAAM, several studies have applied RL for various tasks, such as process optimization in GMAW and/or feedback control in WAAM. Please refer to these works to better illustrate the potential applications of RL in this context. You can detail these advancements in Section 2, which may be presented under a different name as well.
5) In general, I feel that the introduction and Section 2 should be expanded, as something is missing, and too much focus is placed on only one aspect of your work. Since this section should provide the broader literature background, I strongly recommend revising it according to the suggestions and incorporating more references to strengthen the discussion.
6) When describing the Enhanced-Pixel strategy, consider adding an algorithm to better guide readers through its development and implementation. Without this, the explanation may be confusing and not sufficiently helpful to the reader.
7) Line 206, "choosing the most promising one at each iteration": based on what?
8) Lines 210-2134: "The algorithm then extracts the shortest trajectory distance per iteration from the matrix, the "Extract the best value from matrix" step, and stores it into another matrix, represented by "Store in a best value matrix". It is not clear how the best value is extracted. What value is considered?
9) The introduction of Section 3 needs to be rewritten to address similar problems, as it is not currently clear. Additionally, include algorithm tables and structure the section more effectively to enhance the clarity and overall quality of the paper.
10) Please improve Figure 3
12) Lines 424-434: it is not clear.
13) The results in Figure 6 cannot be presented in this manner. Split them into different figures and discuss in detail what can be observed from each figure.
14) At the end of the paper, the only question is related to the starting point. Is it always random for all the strategies? Please specify.
15) Rewrite the conclusion to emphasise the novelty of the work without deviating into unrelated topics. For instance, consider reducing or eliminating lines 706–710 to maintain focus.
16) I suggest changing the title of the article. For example, "Reducing Computation Time in Pixel-Based Path Planning for GMA-DED Using Multi-Armed Bandit" better reflects all aspects of the paper. It explicitly mentions the use of a Multi-Armed Bandit, the simplest reinforcement learning algorithm, avoiding any assumption that a more complex approach is involved. Additionally, it clarifies that the problem is applied specifically to pixel-based path planning, rather than standard zig-zag or other strategies. Lastly, I see no clear advantage in referring to this method as an "Advanced Pixel Strategy" instead of an "Enhanced Pixel Strategy."
Author Response
Comments:
- Try to improve the grammar and clarity of the sentences, checking them paragraph by paragraph.
Authors: The grammar and clarity of the sentences were reviewed and improved individually, ensuring better readability and coherence throughout the manuscript.
- GMA-DED or WAAM is a well-known topic, but the number of citations and the depth of discussion on it seem limited. I suggest expanding the discussion on the challenges of this technology and the role of AI, not just in path planning but more broadly. Moreover, discuss the role of overlapping distance in path planning, cite the work of Ding [3], and include the MAT method for path planning of complex structures [4].
Authors: We appreciate the reviewer's suggestion regarding expanding the discussion on the challenges of GMA-DED/WAAM and a broader role of AI. However, we believe that a detailed discussion on these topics would extend beyond the primary focus of this work and significantly increase the length of the paper. The current version of the manuscript is already 27 pages long, and a lengthy paper with a broad emphasis might dilute the interest of potential readers. Nevertheless, given the pertinence of the reviewer's remarks, we have incorporated additional information as recommended, particularly regarding the role of overlapping distance in path planning, as well as references to the works of Ding [3] and the MAT method for complex structures [4]. These additions enhance the discussion without making the paper overly lengthy or diverging from its main objectives. The revised passages are quoted below and are also highlighted in the revised version of the manuscript.
"…Besides the trajectory planning, it is essential to consider the overlapping distance to ensure high-quality results, as discussed by several authors, including Ding et al. [4] and Hu et al. [5] (see these references for more details on these approaches)." (line 49-51)
"According to the same author, additional trajectory planning strategies were developed to handle complex geometries, such as the MAT [7], A-MAT [8], and the Water Poured [9] methods. Alternatively, the literature suggests that employing a non-conventional space-filling strategy could also be a viable approach to addressing this challenge" (line 56-60)
3) The details about the Basic-Pixel method should be moved to Section 3, as it serves as the foundation for the proposed advancements. Therefore, please revise the last part of the introduction by shifting the detailed explanation of the initial algorithm to Section 3, where it can be presented alongside the RL-based computational optimisation. In the introduction, it would be more effective to discuss the computational limitations of slicing algorithms and planning in general, then introduce RL as a method to address these challenges.
Authors: Please allow the authors not to fully agree with this suggestion, as the focus of the paper is specifically the improvement of the Pixel-based strategy rather than addressing the computational limitations of the current slicing algorithms and path planning strategies (in general). Our counterpoint aligns with the reviewer's suggestion (comment #16) regarding the title change to "Reducing Computation Time in Pixel-Based Path Planning for GMA-DED Using Multi-Armed Bandit", which clarifies that the study is centred only on Pixel-based path planning rather than on conventional zigzag or other strategies. Given this clarification, we believe it is not advisable to restructure the introduction as suggested.
4) Additionally, in WAAM, several studies have applied RL for various tasks, such as process optimisation in GMAW and/or feedback control in WAAM. Please refer to these works to better illustrate the potential applications of RL in this context. You can detail these advancements in Section 2, which may be presented under a different name as well.
Authors: Thanks for the pertinent suggestion. We have included a paragraph (lines 163-184) to discuss the application of reinforcement learning in WAAM. The new text is also reproduced below.
"Reinforcement learning (RL) has emerged as a powerful tool for enhancing system adaptability, process control, and path planning also in wire arc additive manufacturing (WAAM). Wang et al. [17] highlight the importance of monitoring and control in WAAM, particularly in maintaining dimensional accuracy and mitigating defects. While their work focuses on regression networks and Active Disturbance Rejection Control (ADRC) for weld shape optimisation, it underscores the need for advanced AI-driven solutions, such as RL, to further enhance WAAM processes. Beyond system adaptability, RL has also been applied to process control, as demonstrated by Mattera et al. [18], who explore the use of RL to develop intelligent control systems for industrial manufacturing, including WAAM. Their work showcases the potential of RL-based controllers, such as the Deep Deterministic Policy Gradient method, to optimise the welding process while bridging the gap between simulation and real-world applications.
Additionally, RL has been successfully integrated into path-planning strategies. Petrik and Bambach [19] introduce RLTube, an RL-based algorithm that enhances deposition path planning for thin-walled bent tubes, offering greater flexibility and efficiency compared to rigid mathematical approaches. Similarly, in another work [20], the same authors present RLPlanner, which automates path planning for thin-walled structures by combining RL with Sequential Least Squares Programming, ensuring better adaptability to geometric variations. Collectively, these studies demonstrate the growing role of RL in improving WAAM, from deposition path optimisation to real-time process control, paving the way for more intelligent and adaptive manufacturing systems.
In this way, to tackle the time-consuming…"
5) In general, I feel that the introduction and Section 2 should be expanded, as something is missing, and too much focus is placed on only one aspect of your work. Since this section should provide the broader literature background, I strongly recommend revising it according to the suggestions and incorporating more references to strengthen the discussion.
Authors: This concern was addressed in the response to a previous reviewer's comment (#2). As seen, the authors expanded the discussion on the challenges of GMA-DED/WAAM and a broader role of AI. In addition, we incorporated additional information regarding the role of overlapping distance in path planning, as well as references to the works of Ding [3] and the MAT method for complex structures [4].
6) When describing the Enhanced-Pixel strategy, consider adding an algorithm to better guide readers through its development and implementation. Without this, the explanation may be confusing and not sufficiently helpful to the reader.
Authors: The authors believe that algorithm tables may not be the best way to help readers with little or no expertise in this field understand the content. Therefore, we prefer to use flowcharts and accompanying explanations (in the previously published Pixel-related papers, we always used flowcharts).
7) Line 206 "choosing the most promising one at each iteration" based on what?
Authors: It is based on the shortest trajectory distance. To improve this understanding, we provided the following explanation (lines 232-233):
"…choosing the most promising one at each iteration based on the minimum trajectory distance."
8) Lines 210-2134 "The algorithm then extracts the shortest trajectory distance per iteration from the matrix, the "Extract the best value from matrix" step, and stores it into another matrix, represented by "Store in a best value matrix". It is not clear how the best value is extracted. What value is considered?
Authors: We have 10 trajectories, each with a corresponding trajectory distance stored in an initial matrix. In the 'Extract the best value from matrix' step, the trajectory with the minimum distance is selected from this matrix and transferred to the 'best value matrix,' as demonstrated in the 'Store in a best value matrix' step. This final matrix holds the trajectory with the minimum distance value. We made a slight change in the text (line 236):
"Concomitantly, the 2-opt algorithm is applied to optimise each of the trajectories, represented by the "[Heuristic of trajectory planning symbol] + 2-opt" step. In total, ten trajectories are generated by looping and stored in a matrix per iteration (10 loops x iteration), represented by "Store values in a matrix" step. The algorithm then extracts the shortest trajectory distance per iteration from the matrix, the "Extract the best value from matrix" step, and stores it into another matrix, represented by "Store in a best value matrix". This matrix serves as a repository for all the good values found in each iteration. At this point, the algorithm either restarts for a new iteration, repeats the process, or proceeds to the end. "
9) The introduction of Section 3 needs to be rewritten to solve similar problems, as it is not currently clear. Additionally, include algorithm tables and structure the section more effectively to enhance the clarity and overall quality of the paper.
Authors: We believe that algorithm tables might not be the best way to help readers with little or no expertise in this field understand the content. Algorithm tables are more readable for experts and contain detailed aspects that are outside the scope of the current paper. Therefore, the authors still prefer to use flowcharts and accompanying explanations (in previous Pixel papers, we always used flowcharts). As for the rewriting of Section 3, we reviewed it and are unsure what specifically needs to be rewritten. We feel that the section provides initial and basic information, building upon what was introduced in the Introduction section. We always encourage readers to consult the cited papers for further details.
10) Please improve Figure 3
Authors: Figure 3 was improved. Thanks for the remark.
12) Lines 424-434: it is not clear.
Authors: These lines compare two algorithms, Enhanced-Pixel and Advanced-Pixel, in terms of their iteration processes in trajectory planning. The algorithms combine different axis ordering and trajectory planning heuristics (AO-HTP) to generate trajectories. The number of combinations that the algorithms go through per iteration differs between the two methods. In Enhanced-Pixel, for each iteration, 10 different sets of AO-HTP combinations are evaluated (Figure 2(a)), so it loops through 10 combinations per iteration. In contrast, Advanced-Pixel evaluates only one AO-HTP combination per iteration (as shown in Figure 2(b)). To make a fair comparison between the two methods, the authors aligned the iteration criteria: since 10 combinations are evaluated per iteration in Enhanced-Pixel, each of those 10 loops was considered equivalent to a single iteration in Advanced-Pixel. In other words, one iteration of Enhanced-Pixel (10 combinations) corresponds to 10 iterations of Advanced-Pixel. To improve the understanding of this paragraph for all potential readers, we rewrote these lines (lines 456-468) as follows:
"It is important to mention that the number of iterations through the AO-HTP combinations (Axis Ordering and Heuristics of Trajectory Planning) differs between the Enhanced-Pixel and Advanced-Pixel algorithms. In the Enhanced-Pixel method, each iteration evaluates 10 sets of combined AO-HTP (Figure 2(a)), whereas the Advanced-Pixel method evaluates only one combination per iteration (Figure 2(b)). To enable a fair comparison between the two strategies, each of the 10 trajectory generations produced by a single AO-HTP combination in Enhanced-Pixel was treated as one iteration. This means that, for comparison purposes, 10 iterations of Enhanced-Pixel amounted to 1 iteration of Advanced-Pixel. As a result, both algorithms turned comparable to the same iteration basis; Enhanced-Pixel uses 50 iterations with 10 combinations per iteration, while Advanced-Pixel uses 500 individual iterations, making the total number of trajectory evaluations equivalent for both methods."
13) The results in Figure 6 can not presented in this manner. Split in different figures and discuss in detail what we can observe from each figure.
Authors: We agree with this suggestion of splitting the figure into two: the first figure (now Figure 6) focuses on convergence analysis, and the second one (Figure 7) focuses on cumulative regret analysis. However, the explanation of each curve is already provided in the convergence analysis (lines 524-540). Regarding the cumulative regret analysis, the behaviour is similar for all three parts, so a general description was provided (lines 553-561). Therefore, only a reorganisation of the paper was made to enhance clarity. The authors hope to have satisfied the reviewer's remark.
14) At the end of the paper, the only question is related to the starting point. Is it always random for all the strategies? Please specify.
Authors: Yes, for each layer, the trajectory always starts from a randomly selected point. To improve clarity on this concern, the authors revised these lines (706-707) as follows:
"The experimental results are shown in Figure 12, where continuous trajectories were generated for the odd and even layers, as seen in Figure 12(a) and (b), with each layer starting from a randomly selected point"
15) Rewrite the conclusion to emphasise the novelty of the work without deviating into unrelated topics. For instance, consider reducing or eliminating lines 706–710 to maintain focus.
Authors: The authors agree with the reviewer's suggestion to maintain focus, so we have removed lines 706-710. However, we believe the remaining conclusions are comprehensive yet concise, grounded in the results, and aligned with the objectives of the work. Please see the revised conclusion below, without the lines pointed out by the reviewer. We hope they are to the reviewer's satisfaction now.
"The concept of the MAB problem (an AI reinforcement tool) in the algorithm was applied to a non-conventional space-filling Enhanced-Pixel strategy to optimise the trajectory, as stated in the objective of this work. In summary, it was noted that:
- a) The algorithm of the Advanced-Pixel strategy processes the optimised solution faster than its predecessor (Enhanced-Pixel), due to fewer iterations;
- b) Reducing iterations has no negative impact on trajectory planning performance using the Reinforcement Learning approach. In fact, the algorithm performance gain shows that Advanced-Pixel converges, in most cases, to the shortest trajectory with shorter printing times. However, it is worth noting that the solution applied in Advanced-Pixel is based on probabilistic concepts, and one cannot expect the advanced version to beat the predecessor Pixel version in 100% of cases;
- c) The sensitivity of the algorithm performance comparison increases for larger printable parts (higher number of nodes);
- d) Therefore, the implementation of Reinforcement Learning through the MAB problem succeeded well in "grading up" the Pixel family of space-filling trajectory planners.
As future work, there is potential for conducting additional studies involving a larger variety of geometries, as well as exploring the use of more policy tools for working with MAB. This opens the possibility of incorporating other reinforcement learning algorithms. Additionally, the research group plans to investigate the use of clustering techniques to further enhance the performance of the Advanced-Pixel strategy."
16) I suggest changing the title of the article. For example, "Reducing Computation Time in Pixel-Based Path Planning for GMA-DED Using Multi-Armed Bandit" better reflects all aspects of the paper. It explicitly mentions the use of a Multi-Armed Bandit, the simplest reinforcement learning algorithm, avoiding any assumption that a more complex approach is involved. Additionally, it clarifies that the problem is applied specifically to pixel-based path planning, rather than standard zig-zag or other strategies. Lastly, I see no clear advantage in referring to this method as an "Advanced Pixel Strategy" instead of an "Enhanced Pixel Strategy."
Authors: The authors agree with your suggestion regarding retitling the submission. However, we made a slight modification to the suggested title, as follows:
"Reducing Computational Time in Pixel-Based Path Planning for GMA-DED by Using Multi-Armed Bandit Reinforcement Learning Algorithm" (title)
As for the use of 'Advanced Pixel Strategy' instead of 'Enhanced Pixel Strategy', this is primarily a matter of nomenclature. We named it 'Advanced' as an upgraded version of the 'Enhanced' one. The authors hold that 'Advanced Pixel Strategy' reflects the most intelligent way of generating the Pixel trajectory approach to date, owing to its reduced computation times.
Reviewer 2 Report
Comments and Suggestions for Authors
This paper introduces a reinforcement learning-based path planning strategy to optimize computer processing time in GMA-DED additive manufacturing. While innovative, several aspects need improvement:
- The Multi-Armed Bandit’s hyperparameter choices lack detailed justification. Referencing prior studies could bolster the rationale and credibility.
- The experiment compares only two strategies, neglecting traditional approaches like Zigzag and Contour. Including these would provide a more robust validation of the algorithm’s benefits.
- Certain figure captions and annotations, particularly for multi-layered parts, are unclear. Improved sizing and precise labeling would enhance comprehension.
- Relying on just three part shapes limits the evaluation of the algorithm's adaptability. Testing a wider variety of geometries would better demonstrate its stability and versatility.
- The conclusion highlights advantages but omits future work, such as enhancing efficiency, testing in complex scenarios, or integrating additional AI methods.
Author Response
Reviewer #2
- The Multi-Armed Bandit's hyperparameter choices lack detailed justification. Referencing prior studies could bolster the rationale and credibility.
Authors: We appreciate the reviewer's comment regarding the justification of hyperparameter choices for the Multi-Armed Bandit (MAB) approach. In our study, hyperparameters for both the ε-greedy and UCB policies were selected empirically to cover a range of exploration-exploitation trade-offs. For the ε-greedy policy, we considered three values: ε = 0.3 (favouring more exploitation), ε = 0.5 (providing a balance between exploration and exploitation), and ε = 1.0 with decay (starting with full exploration and gradually shifting towards exploitation).
These choices align with standard practice, where a higher ε encourages exploration, while a lower ε favours exploitation. The decay strategy further allows us to observe the transition from exploratory to exploitative behaviour over time.
In line with the reviewer's remark, the authors propose the following to the text (lines 433-445):
"a) ε-greedy policy tool: with ε values arbitrarily defined as 0.3 (favouring more exploitation) , 0.5 (providing a balance between exploration and exploitation), and 1.0. However, a logical artifice was introduced in this policy tool algorithm to improve the tool efficiency even further. A decay rate was applied to the hyperparameter ε, so the algorithm decreases the ε hyperparameter over time at a certain rate. This artifice allows for more exploration at the beginning (with larger ε values) and more exploitation at the end (with smaller ε values). As proof of concept, a decay rate of 1% was defined. But this was applied to the case where ε was defined as 1.0 (a value that would not be reasonable if kept constant). Therefore, the ε value was decreased in steps of 0.01 in each iteration in this work. Although chosen arbitrarily, these values are grounded in the principle that a higher ε promotes exploration, while a lower ε favours exploitation [23]. Additionally, the decay strategy enables us to observe the gradual transition from exploration to exploitation over time."
For the UCB policy, the c parameter was set to 0.3, 0.5, 3.0, and 5.0, based on the general guideline that c controls exploration intensity, with larger values leading to greater uncertainty-driven exploration. These values were selected arbitrarily, but they cover a range of confidence levels to assess their impact on decision-making. Therefore, the following changes have been made to the text (lines 446-449):
"b) Upper Confidence Bound policy tool: c values (hyperparameters) arbitrarily defined as 0.3, 0.5, 3.0, and 5.0. These values were also selected at ramdon, but they span a range of confidence levels to evaluate their impact on decision-making, with larger values encouraging greater exploration driven by increased uncertainty [24]."
Regarding Thompson Sampling, it did not require hyperparameter tuning, as it inherently balances exploration and exploitation through its Bayesian framework.
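For completeness, a minimal Thompson Sampling sketch is shown below. The Gaussian posterior with shrinking variance is our assumption for illustration (the manuscript does not specify the prior), and the reward model is again hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms = 10
sums = np.zeros(n_arms)      # cumulative reward per arm
n = np.zeros(n_arms)         # pull counts

for t in range(500):
    mean = sums / np.maximum(n, 1)
    theta = rng.normal(mean, 1.0 / np.sqrt(n + 1))  # one posterior sample per arm
    a = int(np.argmax(theta))                       # play the arm with the best sample
    r = -rng.normal(100 + a, 5)                     # hypothetical reward
    sums[a] += r
    n[a] += 1
```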
2. The experiment compares only two strategies, neglecting traditional approaches like Zigzag and Contour. Including these would provide a more robust validation of the algorithm's benefits.
Authors: This work focuses solely on improving Pixel-based strategies. Regarding the comparison with traditional approaches, such as Zigzag and Contour, this was addressed in a previous article (numbered 14 in the revised version: Ferreira, R.P.; Vilarinho, L.O.; Scotti, A. Enhanced-pixel strategy for wire arc additive manufacturing trajectory planning: operational efficiency and effectiveness analyses. Rapid Prototyp. J. 2024. https://doi.org/10.1108/RPJ-12-2022-0413), which is cited in lines 211-216 at the beginning of Section 3.
"As mentioned above, Ferreira and Scotti introduced the Enhanced-Pixel strategy [14] as a novel generation of the Pixel strategy family, a filling space approach for path planning generation. The Basic-Pixel strategy was briefly described in section 1, but details are presented in Ferreira and Scotti [13]. In these publications, the performance of the Pixel strategy family was comparatively assessed and already validated with traditional strategies (non-space-filing approaches, such as Zigzag and Contour)"
3.Certain figure captions and annotations, particularly for multi-layered parts, are unclear. Improved sizing and precise labeling would enhance comprehension.
Authors: Thanks for noting. We have revised the figures to improve their size and added more precise labelling to enhance clarity and comprehension. Please check new Figures 8, 9 and 10.
- Relying on just three part shapes limits the evaluation of the algorithm's adaptability. Testing a wider variety of geometries would better demonstrate its stability and versatility.
Authors: We agree with the reviewer's comment regarding the algorithm's adaptability. However, to make this study feasible, the authors designed the experiments using only three parts. Keeping in mind the need to represent different situations (focusing on a computational study), the three parts were systematically chosen, as pointed out in lines 425-427: "Three parts (printable pieces) with different shapes (to minimise chances of bias), presented in Figure 4 with their respective node numbers (a minimal number of nodes needed to discretise the sliced plan), were studied to evaluate a possible computational advantage of Advanced-Pixel". In addition, a dissimilar part was printed as validation (case study). However, this care does not exclude the possibility of conducting additional studies involving a larger number of geometries. To acknowledge this limitation, we have included a note at the end of the Conclusions section, as follows:
As future work, there is potential for conducting additional studies involving a larger variety of geometries, as well as exploring the use of more policy tools for working with MAB. This opens the possibility of incorporating other reinforcement learning algorithms. Additionally, the research group plans to investigate the use of clustering techniques to enhance the performance of the Advanced-Pixel strategy further.
- The conclusion highlights advantages but omits future work, such as enhancing efficiency, testing in complex scenarios, or integrating additional AI methods.
Authors: Thank you for your advice. We also addressed this point at the end of the Conclusions section, as already shown when answering the reviewer's comment #4.
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
I appreciate that the authors have addressed most of my points. However, while I understand that you may not fully agree with my comments, I believe that, to enhance readability and comprehension for readers interested in the topic and familiar with related areas such as reinforcement learning, Section 3 should be revised.
I mean no offence, but despite my background in computer science and experience with related topics (WAAM, AI, etc.), I find this section difficult to follow. Therefore, even if you disagree, I strongly encourage you to improve its clarity to make the paper more accessible to engaged readers like myself, who may enjoy the content. To achieve this, I suggest incorporating algorithm tables and restructuring the section to improve clarity and overall quality. While uninterested readers can skip it, those who are should be able to follow it easily. Moreover, this was the main point of major revision in the first round, yet only a few adjustments have been made to the paper.
Author Response
We sincerely appreciate the time and effort dedicated by the reviewers in evaluating our manuscript after the first round of recommendations. The additional insightful comments and constructive suggestions in this second round will undoubtedly improve the quality and clarity of our work even further. We have carefully addressed each point and revised the manuscript accordingly.
Reviewer #1
Comment:
- I appreciate that the authors have addressed most of my points. However, while I understand that you may not fully agree with my comments, I believe that, to enhance readability and comprehension for readers interested in the topic and familiar with related areas such as reinforcement learning, Section 3 should be revised. I mean no offence, but despite my background in computer science and experience with related topics (WAAM, AI, etc.), I find this section difficult to follow. Therefore, even if you disagree, I strongly encourage you to improve its clarity to make the paper more accessible to engaged readers like myself, who may enjoy the content. To achieve this, I suggest incorporating algorithm tables and restructuring the section to improve clarity and overall quality. While uninterested readers can skip it, those who are should be able to follow it easily. Moreover, this was the main point of major revision in the first round, yet only a few adjustments have been made to the paper.
Authors: Considering the reviewer's remarks, the authors decided to follow the recommendation and insert algorithm tables with pseudocode, along with corresponding explanations. As a result, the entire Section 3 was reformulated, as shown in red in the revised version.
Reviewer 2 Report
Comments and Suggestions for Authors
Accept is suggested.
Author Response
We sincerely appreciate the time and effort the reviewers dedicated to evaluating our manuscript after the first round of recommendations.
Thanks for this reviewer's understanding of our responses.
Round 3
Reviewer 1 Report
Comments and Suggestions for Authors
Thank you, now the article is ready for publishing.