Combination of a Rabbit Optimization Algorithm and a Deep-Learning-Based Convolutional Neural Network–Long Short-Term Memory–Attention Model for Arc Sag Prediction of Transmission Lines
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
1. The literature review focuses on machine learning algorithms, not on arc sag prediction of transmission lines. More literature related to this topic should be discussed.
2. The contents of the Section of Materials and Methods are well-known and can be found in textbooks or websites. What is the novelty of the proposed method?
3. The definitions of variables N and Li are not explained.
4. "CLA" is not officially defined in the text. SSA, NGO, CPO, MRTPP are not defined.
5. What is the unit for the horizontal axis in Figs. 13 and 14?
6. Based on the results in Tables 3 and 4, the performance improvement seems to come from Attention, by looking at the differences between CLA and CNN-LSTM. It would be worth comparing CLA with CNN-Attention and LSTM-Attention, respectively.
7. Based on the results in Table 6, ARO-CLA performs better than the other methods. What is the price for using ARO-CLA, in terms of CPU time or some other aspects?
8. The RMSE, MAE, and MedAE units should be labeled in Tables and Figures.
9. How many transmission lines were used in the dataset? The lengths and specifications of the transmission lines should be given and explained.
10. Variables in Equations (14)-(16) should be defined.
11. It is noted that negative values appear in Figures 13 and 14. Why? What is the base value? What does zero value mean in these figures?
12. Subsection 2.6 is a vital part of this work. The illustration for the arc sag prediction model is not ample.
Author Response
Comments 1: The literature review focuses on machine learning algorithms, not on arc sag prediction of transmission lines. More literature related to this topic should be discussed.
Response 1: Thank you for pointing this out. I agree with this comment. Therefore, I have incorporated additional literature related to arc sag prediction of transmission lines to enhance the comprehensiveness of the literature review. [This change can be found in lines 700 to 713.]
Comments 2: The contents of the Section of Materials and Methods are well-known and can be found in textbooks or websites. What is the novelty of the proposed method?
Response 2: Thank you for this valuable feedback. I agree that some foundational concepts in the Materials and Methods section are well-documented in existing resources. However, the novelty of our proposed method lies in the integration of the Adaptive Rabbit Optimization Algorithm (AROA) with a CNN-LSTM-Attention model specifically tailored for arc sag prediction in transmission lines. This combination is unique in that it leverages AROA to dynamically optimize the hyperparameters of the CNN-LSTM-Attention model, which enhances prediction accuracy and robustness across varying conditions. Additionally, the AROA algorithm’s adaptive adjustments based on population diversity introduce a novel way of balancing global and local exploration, which is particularly beneficial in avoiding local optima in complex, nonlinear time-series data like arc sag. This innovative approach can be found in the revised manuscript, particularly in lines 117 to 268.
Comments 3: The definitions of variables N and Li are not explained.
Response 3: Thank you for pointing this out. I agree with this comment. Therefore, I have added definitions and explanations for the variables N and Li within the algorithm section to clarify their meanings and roles. These additions can be found in lines 114 to 249 of the revised manuscript, ensuring that the variables are clearly defined and contribute to a better understanding of the algorithm's function.
Comments 4: "CLA" is not officially defined in the text. SSA, NGO, CPO, MRTPP are not defined.
Response 4: Thank you for your valuable comment. I agree with this observation. Therefore, I have made the following revisions: The definition of "CLA" is clearly stated in line 451 of the revised manuscript. The SSA and NGO algorithms are introduced in detail in lines 269 and 279, respectively. As for "CPO" and "MRTPP," these terms were mistakenly included in the manuscript and do not actually appear in the study. I have corrected this error and removed any references to them. These changes ensure that all algorithms are appropriately defined and that any incorrect references have been addressed.
Comments 5: What is the unit for the horizontal axis in Figs. 13 and 14?
Response 5: Thank you for your comment. The unit for the horizontal axis in Figures 13 and 14 is time (hours).
Comments 6: Based on the results in Tables 3 and 4, the performance improvement seems to come from Attention, by looking at the differences between CLA and CNN-LSTM. It would be worth comparing CLA with CNN-Attention and LSTM-Attention, respectively.
Response 6: Thank you for your insightful comment. I agree that comparing the performance of CLA with CNN-Attention and LSTM-Attention would provide valuable insights. To address this, I have included additional experiments comparing CLA with CNN-Attention and LSTM-Attention models. The results and analysis of these comparisons can be found in line 585 of the revised manuscript. These additions provide a more comprehensive evaluation of the model performance and further highlight the contribution of the Attention mechanism.
Comments 7: Based on the results in Table 6, ARO-CLA performs better than the other methods. What is the price for using ARO-CLA, in terms of CPU time or some other aspects?
Response 7: Thank you for your thoughtful question. While the ARO-CLA model demonstrates superior performance in terms of prediction accuracy, as shown in Table 6, it is important to consider the computational cost associated with using ARO-CLA. The main price for using ARO-CLA lies in its increased computational complexity, particularly due to the Rabbit Optimization Algorithm (ARO) used for hyperparameter optimization. The ARO algorithm involves iterative search processes to fine-tune the parameters, which can lead to longer CPU processing times compared to simpler models like CLA, LSTM, or CNN-LSTM. Specifically, the ARO optimization process typically requires more iterations to converge to the optimal solution, which may increase the total training time. However, despite the higher computational cost, the improvements in prediction accuracy, robustness, and generalization performance provided by ARO-CLA outweigh the additional computational resources required. In terms of CPU time, the optimization process of ARO is typically more time-consuming, as shown in the figures illustrating the ARO optimization steps (Figures 10 and 11), which display the number of iterations and the corresponding fitness values. While the exact CPU time will vary depending on the specific computational environment and dataset, the trade-off between accuracy and computational cost is a key consideration when using the ARO-CLA model.
Comments 8: The RMSE, MAE, and MedAE units should be labeled in Tables and Figures.
Response 8: Thank you for your comment. I agree that the units for RMSE, MAE, and MedAE should be clearly labeled in the tables and figures. Therefore, I have added the appropriate units to the corresponding tables and figures in the revised manuscript. These updates ensure that the units for RMSE, MAE, and MedAE are clearly indicated for better clarity and understanding. You can find the updated units in the tables and figures throughout the manuscript.
Comments 9: How many transmission lines were used in the dataset? The lengths and specifications of the transmission lines should be given and explained.
Response 9: Thank you for your comment. In the dataset used for this study, only one transmission line was employed. The details of this transmission line, including its specifications and characteristics such as length and other parameters, are provided in the relevant tables. These specifications can be found in the manuscript, specifically in Table 1, which is referenced in line 495 of the revised manuscript.
Comments 10: Variables in Equations (14)-(16) should be defined.
Response 10: Thank you for your comment. I agree that the variables in Equations (14)-(16) should be clearly defined. To address this, I have revised the manuscript to include definitions for the variables in these equations. The updated definitions can be found in line 528 of the revised manuscript, where I have explicitly defined each variable to ensure clarity for the readers.
Comments 11: It is noted that negative values appear in Figures 13 and 14. Why? What is the base value? What does zero value mean in these figures?
Response 11: Thank you for your thoughtful question. The negative values observed in Figures 13 and 14 may be attributed to the influence of wind deflection on the transmission line. Specifically, strong winds can cause the transmission line to deviate or sag in an unexpected manner, leading to negative arc sag values in certain conditions. These fluctuations are a result of the dynamic interaction between the transmission line and environmental factors, such as wind speed and direction. Regarding the base value, the zero value in these figures represents the reference point or the expected arc sag under normal conditions (i.e., without any external disturbances like wind or temperature changes). When the arc sag is above or below this reference point, it indicates how much the transmission line's sag deviates from the expected value due to various factors, including environmental influences or measurement noise. In summary, the negative values in the figures are a result of external factors, such as wind deflection, that affect the transmission line's behavior, while the zero value serves as the baseline representing normal conditions.
Comments 12: Subsection 2.6 is a vital part of this work. The illustration for the arc sag prediction model is not ample.
Response 12: Thank you for your comment. I agree that Subsection 2.6 is a vital part of this work, and I understand the need for a more detailed illustration of the arc sag prediction model. To address this, I have provided a more comprehensive description of the model in the revised manuscript. This updated and expanded explanation can be found in line 451, where I have elaborated on the components and workings of the arc sag prediction model to provide a clearer and more thorough understanding.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
1- Why did you choose to combine the Rabbit Optimization Algorithm (ROA) specifically with the CNN+LSTM+Attention model? Could other optimization algorithms or deep learning architectures achieve similar results?
2- Could you clarify the exact role of the Rabbit Optimization Algorithm in the model? For instance, is it optimizing hyperparameters, feature selection, or something else?
3- How sensitive is the model to changes in parameters? Could you describe how you tuned hyperparameters for both the deep learning model and the Rabbit Optimization Algorithm?
4- Given that you use CNN, LSTM, and Attention, how interpretable are the predictions? Does the model offer insight into what factors most strongly influence sag prediction?
5- How well does the model generalize to transmission lines in different geographical regions or with different operational parameters?
Comments on the Quality of English Language
The English needs to be improved.
Author Response
Comments 1: Why did you choose to combine the Rabbit Optimization Algorithm (ROA) specifically with the CNN+LSTM+Attention model? Could other optimization algorithms or deep learning architectures achieve similar results?
Response 1: Thank you for your question. The decision to combine the Adaptive Rabbit Optimization Algorithm (AROA) with the CNN+LSTM+Attention model was motivated by the need for a powerful optimization mechanism to effectively fine-tune the hyperparameters of a complex deep learning architecture. In this study, the use of AROA is an innovative approach, as previous works have not clearly demonstrated how this algorithm can enhance model optimization for arc sag prediction. AROA is particularly suitable due to its dynamic balance between global exploration and local exploitation, which helps prevent convergence to local optima. This is especially valuable when optimizing complex, nonlinear models like the CNN+LSTM+Attention architecture for high-dimensional, nonlinear time-series data, such as arc sag prediction. Compared to other optimization algorithms, such as Genetic Algorithms or Particle Swarm Optimization, AROA was selected for its efficiency in handling complex search spaces and adaptability to time-series forecasting. By simulating the natural foraging and hiding behaviors of rabbits, AROA achieves an intelligent balance between global exploration and local exploitation through its adaptive energy factor. This enhances convergence speed, improves predictive accuracy, and supports fine-tuning of critical hyperparameters, such as learning rate and LSTM unit count, while avoiding early convergence. As for the deep learning architecture, while other combinations like CNN+GRU or CNN+RNN could also be considered, the CNN+LSTM+Attention model is particularly powerful for time-series predictions due to CNN's ability to capture local features, LSTM's capability for long-term dependencies, and the Attention mechanism's focus on significant temporal features. The Attention layer further enhances interpretability, enabling the model to prioritize critical data points, which is essential for complex, noisy, and nonlinear data like arc sag. In summary, while other optimization algorithms and architectures might achieve similar results, this study introduces AROA as a novel approach to hyperparameter tuning within the CNN+LSTM+Attention model, demonstrating its effectiveness in optimizing the model and enhancing prediction performance. This combination represents an ideal choice for this study and fills a gap in previous literature where AROA's application was not explicitly detailed.
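To make the architecture choice concrete, a minimal Keras sketch of how the three components fit together is given below. The window length, feature count, and layer sizes are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal CNN-LSTM-Attention (CLA) sketch for multivariate time-series
# forecasting; all sizes below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cla(window=24, n_features=8, filters=32, lstm_units=64):
    inputs = layers.Input(shape=(window, n_features))
    # CNN: extract local patterns along the time axis
    x = layers.Conv1D(filters, kernel_size=3, padding="same",
                      activation="relu")(inputs)
    x = layers.MaxPooling1D(pool_size=2)(x)
    # LSTM: capture longer-range temporal dependencies
    x = layers.LSTM(lstm_units, return_sequences=True)(x)
    # Attention: re-weight time steps by relevance (self-attention)
    x = layers.Attention()([x, x])
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(1)(x)  # predicted sag at the next step
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_cla()
model.summary()
```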
Comments 2: Could you clarify the exact role of the Rabbit Optimization Algorithm in the model? For instance, is it optimizing hyperparameters, feature selection, or something else?
Response 2: Thank you for your question. The Adaptive Rabbit Optimization Algorithm (AROA) plays a critical role in enhancing the performance of the CNN+LSTM+Attention model by specifically focusing on optimizing the hyperparameters of the model. The AROA algorithm dynamically balances global exploration and local exploitation through foraging and hiding behaviors, which helps in effectively searching the complex hyperparameter space. This approach prevents the model from converging prematurely to local minima, thus improving prediction accuracy and stability. In this study, AROA was chosen because it not only improves convergence speed but also enhances the model's generalization capabilities, which are essential for accurate arc sag prediction under various operational conditions. While AROA optimizes parameters like learning rate, the number of LSTM units, and attention weights, it could also be adapted to handle feature selection if necessary. However, the primary focus here was on hyperparameter tuning to maximize the efficiency and accuracy of the CNN+LSTM+Attention model.
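As a rough illustration of this role, the schematic below tunes two hyperparameters with a rabbit-style population search. The energy factor, update rules, and toy fitness function are simplified stand-ins, not the paper's AROA equations; in practice the fitness would train the CLA model and return its validation error.

```python
# Schematic rabbit-style hyperparameter search; the update rules and
# fitness below are simplified stand-ins for the paper's AROA equations.
import numpy as np

rng = np.random.default_rng(0)
bounds = np.array([[1e-4, 1e-2],    # learning rate
                   [16.0, 128.0]])  # LSTM units

def fitness(params):
    # Placeholder: in the real pipeline, train the CLA model with these
    # hyperparameters and return the validation RMSE.
    lr, units = params
    return abs(np.log10(lr) + 3) + abs(units - 64) / 64

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(10, 2))
best = min(pop, key=fitness).copy()
T = 50
for t in range(T):
    energy = 2 * (1 - t / T)           # decays: exploration -> exploitation
    for i in range(len(pop)):
        if energy > 1:                 # "detour foraging": global search
            partner = pop[rng.integers(len(pop))]
            cand = pop[i] + rng.standard_normal(2) * (partner - pop[i])
        else:                          # "random hiding": local refinement
            cand = best + energy * 0.05 * rng.standard_normal(2) \
                   * (bounds[:, 1] - bounds[:, 0])
        cand = np.clip(cand, bounds[:, 0], bounds[:, 1])
        if fitness(cand) < fitness(pop[i]):
            pop[i] = cand
    best = min(pop, key=fitness).copy()
print(f"best learning rate = {best[0]:.2e}, LSTM units = {int(best[1])}")
```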
Comments 3: How sensitive is the model to changes in parameters? Could you describe how you tuned hyperparameters for both the deep learning model and the Rabbit Optimization Algorithm?
Response 3: Thank you for your question. The model’s sensitivity to changes in parameters is a key consideration, particularly given the nonlinear and complex nature of arc sag prediction. To ensure optimal performance, we systematically tuned the hyperparameters of both the deep learning model (CNN+LSTM+Attention) and the Adaptive Rabbit Optimization Algorithm (AROA) through a series of iterative experiments. For the deep learning model, we adjusted parameters including the number of CNN filters, LSTM units, attention layer dimensions, learning rate, and batch size. Sensitivity analysis revealed that the learning rate and number of LSTM units significantly impacted model performance, particularly in terms of convergence speed and prediction accuracy. Higher learning rates led to faster convergence but increased the risk of overshooting, while lower learning rates improved stability but slowed down training. By testing various configurations, we found that a moderate learning rate paired with a balanced number of LSTM units offered the best performance. For AROA, key parameters such as population size, iteration count, and the adaptive energy factor were tuned. These parameters govern the balance between global exploration and local exploitation, which is crucial for avoiding premature convergence. We found that increasing the iteration count and fine-tuning the adaptive energy factor allowed the algorithm to better explore the search space and avoid local minima. The population size was also tested for optimal balance; larger populations increased search diversity but extended computational time, while smaller populations reduced computation time but risked lower diversity. In summary, both the deep learning model and AROA showed sensitivity to parameter adjustments, particularly in terms of learning rate, LSTM unit count, iteration count, and population size. Hyperparameter tuning was conducted through systematic testing to identify the configurations that maximize predictive accuracy and model robustness for arc sag prediction.
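A simple way to reproduce this kind of sensitivity check is a sweep over one hyperparameter at a time. The sketch below varies only the learning rate, reusing the hypothetical build_cla constructor from the earlier sketch, with random arrays standing in for the real windowed dataset.

```python
# Toy learning-rate sensitivity sweep; X and y are random stand-ins for
# the windowed sag data, and build_cla is the earlier hypothetical sketch.
import numpy as np
import tensorflow as tf

X = np.random.rand(256, 24, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")

for lr in [1e-2, 1e-3, 1e-4]:
    model = build_cla()
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    hist = model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
    print(f"lr={lr:.0e}  final val_loss={hist.history['val_loss'][-1]:.4f}")
```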
Comments 4: Given that you use CNN, LSTM, and Attention, how interpretable are the predictions? Does the model offer insight into what factors most strongly influence sag prediction?
Response 4: Thank you for this question. The use of CNN, LSTM, and Attention mechanisms in the model enhances interpretability, allowing us to gain insights into the factors influencing sag prediction. The Attention layer, in particular, plays a crucial role in improving interpretability by assigning weights to various time steps and input features, which helps the model focus on the most relevant parts of the data for making accurate predictions. The Attention mechanism provides a view into which factors and moments in the time series data are most critical for predicting arc sag. For instance, it can highlight whether certain time periods, such as high wind speeds or rapid temperature changes, are particularly influential. By analyzing the attention weights, we gain a better understanding of how factors like temperature, load, and environmental conditions (e.g., wind speed and humidity) contribute to changes in arc sag. This insight allows us to interpret the model's predictions in a more transparent manner, as we can observe which input variables have higher weights, indicating their relative importance in the prediction process. In summary, the model not only offers high predictive accuracy but also provides interpretable outputs through the Attention mechanism, which helps identify the most influential factors in arc sag prediction. This interpretability is essential for practical applications, as it allows operators to understand and trust the model's predictions and adjust maintenance or operational strategies accordingly.
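One way to surface these weights is to expose the attention score matrix directly, as in the sketch below. The layer sizes are again illustrative assumptions, and averaging the scores is only one simple view of time-step importance.

```python
# Sketch of extracting attention scores for interpretability; all layer
# sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

window, n_features = 24, 8
inputs = layers.Input(shape=(window, n_features))
x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inputs)
x = layers.LSTM(64, return_sequences=True)(x)
# The Keras Attention layer can return its (batch, Tq, Tv) score matrix
context, scores = layers.Attention()([x, x], return_attention_scores=True)
output = layers.Dense(1)(layers.GlobalAveragePooling1D()(context))
model = models.Model(inputs, [output, scores])

pred, s = model.predict(np.random.rand(1, window, n_features), verbose=0)
step_importance = s.mean(axis=1)[0]   # average attention on each time step
print("most attended time step:", int(step_importance.argmax()))
```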
Comments 5: How well does the model generalize to transmission lines in different geographical regions or with different operational parameters?
Response 5: Thank you for your question. Generalization is an important aspect of this model, especially considering the potential variations in transmission lines across different geographical regions or operational conditions. To test the model's generalizability, we designed experiments with diverse input data reflecting variations in environmental conditions (e.g., temperature, wind speed, humidity) and operational parameters (e.g., voltage levels, conductor type). Results indicate that the CNN+LSTM+Attention model, optimized with the Adaptive Rabbit Optimization Algorithm (AROA), maintains robust predictive accuracy across varying conditions. The CNN component helps capture local features consistently, while the LSTM is adept at handling temporal dependencies, which remain similar despite regional changes. Additionally, the Attention layer allows the model to dynamically focus on critical time points or features most relevant to sag prediction, even when faced with different operational settings or environmental conditions. However, when applying the model to completely new geographical areas or significantly different operational parameters, some retraining may be necessary to capture region-specific factors. For instance, lines in areas with extreme climate differences may require fine-tuning to incorporate localized weather patterns or unique conductor materials. Generally, the AROA-driven hyperparameter tuning enhances the model's flexibility, allowing it to adapt well to various scenarios, although minor adjustments can improve performance for significantly different contexts. In summary, the model demonstrates strong generalization capabilities, with potential for adaptation to new regions or conditions with minimal retraining, ensuring it remains practical for broad deployment across diverse transmission systems.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
This study is very interesting. However, several issues need to be addressed for it to become a strong paper. First, a precise definition of the dataset and a sample dataset should be provided, along with a detailed explanation of which columns were selected and what was predicted from this dataset. Additionally, it is unclear whether the proposed algorithm is entirely new or a modified version of an existing one. Specifically, there is no clear explanation of how ARO combines with LSTM for hyperparameter tuning. The paper should address these aspects to improve its quality.
1. The terms ARO (Adaptive Rabbit Optimization Algorithm) and ROA (Rabbit Optimization Algorithm) are a bit confusing for readers. How about changing your term to AROA and using it consistently throughout the document? Because the term ARO reads as an Adaptive Optimization Algorithm without "Rabbit," it is easily confused with ROA. Existing method: ROA (Rabbit Optimization Algorithm); your approach: AROA (Adaptive Rabbit Optimization Algorithm).
2. You have mentioned the dataset, real-time monitored climate data, and arc sag measurements from 2021 to 2022. Columns include voltage, temperature, conductor type, tension, tower height, line length, wind speed, direction, and humidity. For readers, providing a sample dataset would be clearer than just stating column names with a total number of records. Additionally, show your preprocessing steps, including normalization, how missing data has been addressed, and more.
3. For development environments, you mention that they have been developed on AMD Ryzen. Is this machine installed with a GPU? Please specify the installed GPU if you have one.
4. What does CLA mean? Do you mean the Cognitive Rabbit Algorithm or the Cognitive Rabbit Approach?
5. You have proposed ARO; how did you transform ROA into ARO? There is no specific logic or formula explaining how this was done. Did you propose a new method, or did you try to test with those algorithms? In such cases, careful attention must be given to expressions. Clearly stating phrases like "We devised a new method," "We attempted a new approach," or "We modified the existing logic" is essential.
6. How did you apply the update parameter in FIGURE 7? ARO optimized CLA? It would be helpful if you provided a more detailed mechanism, “Update Hyper-parameter to Optimize CNN-LSTM-Attention.”
7. Many algorithms appear in your document without a clear description of their characteristics, e.g., ARO-CLA, SSA-CLA, and NGO-CLA.
8. In Table 6, you mentioned 1 time step. It would be better to provide all time steps.
9. Figure 13 is a very nice figure. It would be better to provide all values if you only have 100 data points, in the form of a table. Can you provide more data points? 100 seems like a small set.
10. How can you say ARO-CLA is most effective? You should provide the overall average of the results in Table 6. Would it be the total average?
11. You mentioned in the abstract that your ARO was used for LSTM parameter tuning, and then you explained why your LSTM is not performing better than ARO-CLA. Did you propose ARO-LSTM or ARO-CLA?
12. Please add a contribution section that clarifies the purpose of this study and its contributions.
13. Lastly, in technical writing, providing algorithms or pseudocode whenever possible can greatly aid readers' understanding.
Author Response
Response 1: Thank you for your feedback on terminology clarity. I agree that the distinction between ROA (Rabbit Optimization Algorithm) and ARO (Adaptive Rabbit Optimization Algorithm) could be confusing. To address this, I have updated the terminology throughout the document to use AROA (Adaptive Rabbit Optimization Algorithm) consistently when referring to our approach. This adjustment helps to clearly distinguish our proposed method from the traditional ROA, avoiding potential misunderstandings.
This revised terminology can be found in all relevant sections of the document to ensure clarity for readers. Thank you for your suggestion, which has improved the readability and precision of the manuscript.
Response 2: Thank you for your suggestion. I have addressed this feedback by adding a sample dataset in the form of a table in the revised manuscript to give readers a clearer understanding of the data structure, including key columns such as Voltage, Temperature, Conductor Type, Tension, Tower Height, Line Length, Wind Speed, Direction, and Humidity. Additionally, I have detailed the data preprocessing steps in the text.
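For illustration, the sketch below shows one plausible version of such preprocessing with pandas and scikit-learn. The file name, column names, and imputation settings are hypothetical and may differ from the paper's actual pipeline.

```python
# Hypothetical preprocessing sketch: time-based interpolation for short
# sensor gaps, then min-max normalization before windowing.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("line_monitoring.csv", parse_dates=["timestamp"])  # hypothetical file
df = df.set_index("timestamp").sort_index()

num_cols = ["voltage", "temperature", "tension", "wind_speed", "humidity"]
df[num_cols] = df[num_cols].interpolate(method="time", limit=3)  # fill short gaps
df = df.dropna(subset=num_cols)        # discard rows with longer gaps

scaler = MinMaxScaler()                # scale features to [0, 1]
df[num_cols] = scaler.fit_transform(df[num_cols])
```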
Comments 3: For development environments, you mention that they have been developed on AMD Ryzen. Is this machine installed with a GPU? Please specify the installed GPU if you have one.
Response 3: Thank you for your question. I have clarified this in the revised manuscript by specifying the installed GPU in the development environment. This information has been added to the table in line 497, where I have detailed the hardware specifications, including the GPU installed on the AMD Ryzen machine used in this study. This addition ensures readers have a complete understanding of the computational resources involved.
Comments 4: What does CLA mean? Do you mean the Cognitive Rabbit Algorithm or the Cognitive Rabbit Approach?
Response 4: Thank you for your question. In this study, "CLA" is an abbreviation for CNN-LSTM-Attention rather than referring to the Cognitive Rabbit Algorithm or Cognitive Rabbit Approach. I have clarified this terminology in the revised manuscript to prevent any potential confusion. This update ensures that the abbreviation is clearly understood in the context of the model used.
Comments 5: You have Proposed ARO; how did you transform ROA to ARO ?? There is no specific logic or formula for how to make this. Did you propose a new method? or you have try to test with those algorithms. In such cases, careful attention must be given to expressions. Clearly stating phrases like "We devised a new method," "We attempted a new approach," or "We modified the existing logic" is essential.
Response 5: Thank you for your question. The AROA algorithm used in this study is an optimized version of the ROA algorithm, designed to enhance its adaptability and performance for our specific application. I have clarified this in the revised manuscript, specifically in line 178, to ensure that it is clear we optimized the existing ROA.
Comments 6: How did you apply the update parameter in FIGURE 7? ARO optimized CLA? It would be helpful if you provided a more detailed mechanism, "Update Hyper-parameter to Optimize CNN-LSTM-Attention."
Response 6: Thank you for your comment. I agree that a more detailed explanation of the update mechanism in Figure 7 would enhance understanding. To address this, I have provided a more comprehensive description of how the AROA algorithm optimizes the CNN-LSTM-Attention (CLA) model by updating key hyperparameters. This expanded explanation can be found in lines 250 and 451 of the revised manuscript. These additions clarify how AROA iteratively adjusts parameters like the learning rate and LSTM units to improve model performance, ensuring that readers gain a clearer insight into the optimization process.
Comments 7: There are a lot of Algorithms that appear in your document without clearly mentioning their characteristics for ARO-CLA, SSA-CLA, and NGO-CLA.
Response 7: Thank you for your comment. I agree that it is important to clarify the characteristics of the algorithms used in ARO-CLA, SSA-CLA, and NGO-CLA. To address this, I have added a detailed description of each algorithm’s characteristics and their specific contributions to the CLA model in line 635 of the revised manuscript. This addition provides readers with a clearer understanding of the unique features and strengths of each algorithm in optimizing the CNN-LSTM-Attention model.
Comments 8: In Table 6, you mentioned 1 time step. It would be better to provide all time steps.
Response 8: Thank you for your suggestion. I agree that including all time steps in Table 6 would provide a more complete understanding. To address this, I have added all time steps in the revised manuscript to ensure clarity and comprehensiveness. This addition offers readers a full view of the time series used in the analysis, enhancing the interpretability of the results.
Comments 9: Figure 13 is a very nice figure. It would be better to provide all values if you only have 100 data points, in the form of a table. Can you provide more data points? 100 seems like a small set.
Response 9: Thank you for your suggestion. I agree that increasing the data points would enhance the figure's detail. To address this, I have updated Figure 13 to display 200 data points instead of the original 100, providing a more comprehensive view. This change allows readers to observe more granular patterns and trends in the data, as reflected in the revised manuscript.
Comments 10: How can you say ARO-CLA is most effective? You should provide the overall average of the results in Table 6. Would it be the total average?
Response 10: Thank you for your question. The effectiveness of AROA-CLA can be comprehensively evaluated by considering the four key performance metrics—RMSE, MAE, R², and MedAE—presented in Table 6. Each of these metrics highlights different aspects of model performance: RMSE indicates the average magnitude of prediction errors, with lower values reflecting higher accuracy; MAE measures the absolute difference between predictions and actual values, where a lower MAE suggests greater precision; R² assesses the proportion of variance explained by the model, with values closer to 1 representing a better fit; and MedAE, which is less sensitive to outliers, shows the model’s accuracy even in noisy data. By calculating the overall average across these metrics, we obtain a comprehensive benchmark that illustrates AROA-CLA's superior performance in optimizing all four evaluation criteria, supporting its effectiveness in arc sag prediction. This analysis has been added to the revised manuscript to provide a clearer justification of AROA-CLA's performance advantages.
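For reference, the four metrics can be computed as in the minimal sketch below, with dummy arrays standing in for the paper's measured and predicted sag values (assumed here to be in meters).

```python
# Computing RMSE, MAE, R^2, and MedAE with scikit-learn; the arrays are
# dummy stand-ins for measured and predicted sag (assumed in meters).
import numpy as np
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             r2_score, median_absolute_error)

y_true = np.array([10.2, 10.5, 10.1, 9.8, 10.0])
y_pred = np.array([10.1, 10.6, 10.0, 9.9, 10.1])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # same unit as sag
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)                       # dimensionless
medae = median_absolute_error(y_true, y_pred)
print(f"RMSE={rmse:.3f} m, MAE={mae:.3f} m, R2={r2:.3f}, MedAE={medae:.3f} m")
```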
Comments 11: You mentioned in the abstract that your ARO was used for LSTM parameter tuning, and then you explained why your LSTM is not performing better than ARO-CLA. Did you propose ARO-LSTM or ARO-CLA?
Response 11: Thank you for your question. In this study, we proposed AROA-CLA (Adaptive Rabbit Optimization Algorithm applied to CNN-LSTM-Attention) rather than AROA-LSTM. The purpose of AROA is to optimize the hyperparameters of the entire CLA model, which includes CNN, LSTM, and Attention components, rather than tuning only the LSTM layer. The CLA structure leverages the strengths of each component: CNN for capturing local features, LSTM for handling temporal dependencies, and Attention for focusing on critical time steps. This comprehensive approach enables AROA-CLA to achieve superior predictive performance compared to using LSTM alone or even AROA applied only to LSTM. I have revised the manuscript to correct this error and ensure that the scope and application of AROA in optimizing the full CLA model are clearly understood.
Comments 12: Please add a contribution section that clarifies the purpose of this study and its contributions.
Response 12: Thank you for your suggestion. I have added a contribution section to the revised manuscript, specifically in lines 99 to 113, to clarify the purpose of this study and highlight its main contributions. This section provides readers with a clear understanding of the study's objectives and the unique value it brings to the field, particularly regarding the optimization of the CNN-LSTM-Attention model using the Adaptive Rabbit Optimization Algorithm (AROA) for improved arc sag prediction.
Comments 13: Lastly, in technical writing, providing algorithms or pseudocode whenever possible can greatly aid readers' understanding.
Response 13: Thank you for your suggestion. I agree that including algorithms or pseudocode can enhance understanding. To address this, I have added more detailed descriptions in the revised manuscript to clarify the implementation process and methodology. These additions aim to improve clarity and provide readers with a more comprehensive understanding of the technical aspects of this study.
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
1. The response to my first comment in the previous review could be better. Although additional literature related to arc sag prediction was added to the references, they should be discussed in the Introduction in detail, not just quoting the reference numbers. Therefore, a serious discussion about the latest research in the field of arc sag prediction should be included in the Introduction; otherwise, this paper is incomplete. This paper needs to provide insight into understanding arc sag prediction methods.
2. Using only one transmission line to demonstrate the methodology's performance is not enough.
3. Table 1 should be mentioned in the text. The temperature in Table 1 is around minus 20 degrees Celsius. Why? Is this normal?
Author Response
Comments 1: The response to my first comment in the previous review could be better. Although additional literature related to arc sag prediction was added to the references, they should be discussed in the Introduction in detail, not just quoting the reference numbers. Therefore, a serious discussion about the latest research in the field of arc sag prediction should be included in the Introduction; otherwise, this paper is incomplete. This paper needs to provide insight into understanding arc sag prediction methods.
Response 1: Thank you for your feedback. I appreciate your suggestion regarding the inclusion of a more detailed discussion on arc sag prediction in the Introduction. In response, I have incorporated a thorough discussion of the latest research in this field. The relevant content has been added between lines 43 to 70 in the revised manuscript. This section provides an in-depth analysis of the arc sag prediction methods and offers insights into the current advancements in the field.
Comments 2: Using only one transmission line to demonstrate the methodology's performance is not enough.
Response 2: Thank you for pointing this out. I agree with this comment. Therefore, I have clarified the description regarding the high-voltage transmission lines. As stated in the revised manuscript, "Ultra-high voltage transmission lines are generally very long and have frequent towers, with one located every few meters. Although there is only a single line, the arc sag data is derived from measurements between several towers."
Comments 3: Table 1 should be mentioned in the text. The temperature in Table 1 is around minus 20 degrees Celsius. Why? Is this normal?
Response 3: The temperatures mentioned in this paper refer to ambient temperatures in an area located in northeastern China. The temperature data collection began on New Year’s Day, corresponding to the time when the data was obtained. During the winter months in northeastern China, particularly around New Year's Day, the ambient temperature typically drops below -20 degrees Celsius. This represents a typical winter temperature and falls within the normal range of low-temperature conditions for the region.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
Thank you for kindly explaining and updating your manuscript. Upon reviewing all the questions again, I can see that your manuscript has been significantly improved.
1. Term AROA has been updated
2. Data set description Table 1 data sheet provided
3. RTX 4070 Ti Super added
4. Term CLA revised
5. ROA Clarified
6. Figure 7, has been changed
7. Provided detailed algorithm
8. Whole time step added
9. Figure for whole time step figures added
10. ARO-CLA Explained
11. The effectiveness of AROA-CLA has been explained
12. Contribution sections added
Author Response
5. Additional clarifications
Thank you for taking the time to read my paper and provide your valuable feedback. Your suggestions have been extremely helpful, and I will carefully consider them to further improve the manuscript. Once again, thank you for your support and guidance.
Author Response File: Author Response.pdf
Round 3
Reviewer 1 Report
Comments and Suggestions for Authors
No further comments.