Life Insurance Fraud Detection: A Data-Driven Approach Utilizing Ensemble Learning, CVAE, and Bi-LSTM
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
(1) The novelty and scientific contribution are limited:
The combination of CVAE and Bi-LSTM for fraud detection is interesting but not novel, as ensemble/hybrid models combining deep sequential models and decision trees (e.g., RF, XGBoost) have been explored in multiple prior works. The chaotic perturbation applied to the CVAE is insufficiently justified theoretically and lacks empirical ablation. The authors should moderate their claims of novelty and explicitly discuss how their method advances beyond existing hybrid models in fraud detection. A more comprehensive literature review is needed.
(2) The paper uses a custom synthetic dataset. While fraud datasets are indeed difficult to access, the paper offers no external validation, and no code or dataset sharing is suggested. Meanwhile, the manuscript emphasizes accuracy, which is not suitable for imbalanced datasets.
(3) No comparison is made to traditional models like XGBoost, LightGBM, logistic regression, or cost-sensitive SVMs. And there is no ablation study to show the importance of chaotic perturbation or the added fraud prediction score to the random forest.
(4) Mathematical Consistency: Many equations (e.g., the BCE loss and the CVAE reconstruction term) are either copied verbatim from other sources or improperly rendered. Some are redundant or incorrect (e.g., the BCE equation syntax).
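For reference, the standard binary cross-entropy loss over N samples, with labels y_i in {0, 1} and predicted probabilities ŷ_i, is conventionally written as follows (this is the textbook form the manuscript's equation should match, not a reconstruction of the authors' notation):

$$\mathcal{L}_{\mathrm{BCE}} = -\frac{1}{N}\sum_{i=1}^{N}\left[\, y_i \log \hat{y}_i + (1 - y_i)\log\left(1 - \hat{y}_i\right)\right]$$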
Comments on the Quality of English Language
The English could be improved to more clearly express the research.
Author Response
- The novelty and scientific contribution are limited:
The combination of CVAE and Bi-LSTM for fraud detection is interesting but not novel, as ensemble/hybrid models combining deep sequential models and decision trees (e.g., RF, XGBoost) have been explored in multiple prior works. The chaotic perturbation applied to the CVAE is insufficiently justified theoretically and lacks empirical ablation. The authors should moderate their claims of novelty and explicitly discuss how their method advances beyond existing hybrid models in fraud detection. A more comprehensive literature review is needed.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The manuscript presents a novel hybrid method for detecting fraud in life insurance using CVAE, Bi-LSTM, and RF+Bi-LSTM models. The approach of generating synthetic data, employing SMOTE for balancing, and conducting a comparative performance analysis focusing on recall and F1-score is noteworthy. The manuscript's structure is well-designed and makes a significant contribution to the field of AI in fraud detection.
Nevertheless, several areas need attention:
- Language Quality: The manuscript contains grammatical and syntactical errors that require careful proofreading.
- Methodological Clarity: The explanation of the algorithmic steps, particularly for the hybrid model, could be more concise.
- Figures/Tables: Some figures and tables lack appropriate legends, axis labels, or comprehensive captions.
- Results Discussion: It would be beneficial to consolidate overlapping insights to enhance readability.
- Conclusion Enhancement: The conclusions could be strengthened by including practical implications or guidance for deployment.
Author Response
- Language Quality: The manuscript contains grammatical and syntactical errors that require careful proofreading.
Author Response:
We sincerely thank the reviewer for pointing this out. We have thoroughly revised the manuscript for grammatical, syntactical, and typographical errors. The entire manuscript has been carefully proofread and edited to improve overall language quality and readability. We believe the revised version now meets the required linguistic standards.
- Methodological Clarity: The explanation of the algorithmic steps, particularly for the hybrid model, could be more concise.
Author Response:
Thank you for your valuable feedback. We have revised the methodology section to enhance clarity and conciseness, especially in the explanation of the hybrid model. The algorithmic steps are now streamlined with improved structure and flow to avoid redundancy while preserving technical accuracy.
- Figures/Tables: Some figures and tables lack appropriate legends, axis labels, or comprehensive captions.
Author Response:
Thank you for highlighting this important aspect. We have thoroughly reviewed all figures and tables in the manuscript. Missing legends and axis labels have been added, and captions have been rewritten to be more descriptive and self-explanatory. These revisions aim to ensure that each figure and table is clear, informative, and understandable without needing to refer back to the main text.
- Results Discussion: It would be beneficial to consolidate overlapping insights to enhance readability.
Author Response:
We appreciate the reviewer’s suggestion. In response, we have carefully reviewed the Results and Discussion section and consolidated overlapping insights to improve clarity and flow. Redundant points have been removed, and related findings have been grouped together to present a more cohesive and streamlined discussion. We believe these revisions significantly enhance the readability and interpretability of the results.
- Conclusion Enhancement: The conclusions could be strengthened by including practical implications or guidance for deployment.
Author Response:
Thank you for this insightful suggestion. We have revised the conclusion section to include practical implications of our findings, highlighting how the proposed approach can be applied in real-world insurance fraud detection scenarios. Additionally, we have provided guidance on potential deployment strategies, including threshold tuning, integration with existing claim management systems, and risk-based claim prioritization. These additions aim to better connect our research outcomes with practical applications and industry relevance.
Reviewer 3 Report
Comments and Suggestions for Authors
This paper looks at how to detect life insurance fraud. The framework tests three different models, i.e., CVAE, Bi-LSTM, and a mix of Random Forest with Bi-LSTM. The paper is generally well written and clearly articulates the challenges in insurance fraud detection.
Strengths
The manuscript clearly explains that insurance fraud is a big problem and why older methods aren't enough. It highlights its financial impact as well.
The authors have addressed class imbalance, which is a key issue in fraud detection datasets. They deal with this using SMOTE. They have also done comprehensive preprocessing.
The overall framework also makes sense. The introduction of a hybrid RF + Bi-LSTM model and its comparison against standalone CVAE and Bi-LSTM makes a decent contribution. I like the idea of combining the temporal learning of Bi-LSTM with the interpretability of Random Forests, because it seems promising for practical deployment.
The paper correctly emphasizes the importance of recall and F1-score over accuracy for imbalanced datasets. The use of ROC and Precision-Recall evaluation is also good.
Weaknesses
The data collection process (Section 2.1) is written poorly. The only description given is "utilized the dataset from Scratch". There is no mention of whether this dataset is publicly available or how it can be accessed for reproducibility. "An initial pool of 120 features was generated" needs more elaboration on the process details to ensure it is relevant and representative of the real world. Although some of the features are listed in Tables 1 and 2, I'd also like to see a table (as an appendix) that lists the features and their relevance. Even better, if the dataset is good, make it available to the research community through Kaggle, etc.
SMOTE was applied to balance the dataset to a 50:50 ratio before model training. Applying SMOTE to the entire dataset before splitting into training and testing sets is not recommended. This means that synthetic samples generated from the training set might appear in the test set, leading to overly optimistic performance metrics.
The manuscript also does not explain in detail how the 'final 5' features were selected. Was it a simple ranking, a weighted combination, or a sequential process?
The authors talk about the methods used, but important details are missing. It is imperative that the number of layers, neurons per layer, activation functions, optimizers, learning rates, epochs, batch sizes, etc. be specified for the models.
The abstract and results section consistently report very low recall values for all models (CVAE: 3.28%, Bi-LSTM: 5.98%, Hybrid: 5.98%). These recall rates are not acceptable for a real-world solution. While the authors state that the CVAE "failed to detect many fraudulent cases" and that the Bi-LSTM "missed 110 fraud cases," the magnitude of this failure is not discussed in terms of its practical implications.
The conclusion mentions "integration of reinforcement learning and attention-based mechanisms" as future research directions. While these are relevant areas, the connection to the current paper, especially the low recall, is very weak.
Summary
The paper addresses an important and difficult problem and the proposed hybrid method is interesting. The methodology is generally good.
However, very low recall scores across all models are a serious issue. This needs a deeper and more critical discussion. Overall, the paper has potential, but it needs some major revisions addressing all the points mentioned in the weaknesses section.
Author Response
The data collection process (Section 2.1) is written poorly. The only description given is "utilized the dataset from Scratch". There is no mention of whether this dataset is publicly available or how it can be accessed for reproducibility. "An initial pool of 120 features was generated" needs more elaboration on the process details to ensure it is relevant and representative of the real world. Although some of the features are listed in Tables 1 and 2, I'd also like to see a table (as an appendix) that lists the features and their relevance. Even better, if the dataset is good, make it available to the research community through Kaggle, etc.
Dataset Availability (Now Shared as Supplementary File 1):
The complete synthetic dataset used in our experiments has been added to the submission as Supplementary File 1. It contains 4,000 samples and 83 curated features representing real-world insurance claim attributes.
Feature Listing and Relevance (Provided in Supplementary File 2):
A full list of the original 120 generated features, including their names, types (numerical/categorical), and relevance to fraud detection, is included in Supplementary File 2 as a feature dictionary. This supports transparency and helps ensure that the dataset is representative of real-world insurance claim scenarios.
Public Sharing via Kaggle (Planned):
In addition to the supplementary files, we intend to release the dataset publicly on Kaggle following peer review and acceptance, so that it can be used by the broader research community.
SMOTE was applied to balance the dataset to a 50:50 ratio before model training. Applying SMOTE to the entire dataset before splitting into training and testing sets is not recommended. This means that synthetic samples generated from the training set might appear in the test set, leading to overly optimistic performance metrics.
Response:
We sincerely thank the reviewer for pointing out this important concern. We would like to clarify that:
- The original dataset was constructed with a 15:85 fraud-to-genuine class imbalance, designed to reflect real-world insurance fraud distributions.
- Before applying SMOTE, we first split the dataset into training (80%) and testing (20%) subsets.
- SMOTE was then applied only to the training set, thereby preventing any synthetic samples from leaking into the test set.
- The test set remained untouched and retained its original distribution, ensuring that all reported metrics reflect true generalization performance without bias.
This corrected workflow is now explicitly described in Section 3.3 (Data Preprocessing), and all model evaluation results (Section 5 and Table 2) were generated using this leakage-free protocol. A minimal code sketch of the protocol is shown below.
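For illustration, here is a minimal sketch of the leakage-free protocol using scikit-learn and imbalanced-learn; the variable names and random seed are illustrative, not the exact code used in the study:

```python
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# 80/20 split first, stratified to preserve the original 15:85 class ratio
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42
)

# SMOTE is fit on the training partition only; the test set keeps its
# original imbalanced distribution, so no synthetic sample can leak into it
smote = SMOTE(sampling_strategy=1.0, random_state=42)  # 50:50 after resampling
X_train_bal, y_train_bal = smote.fit_resample(X_train, y_train)
```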
The manuscript also does not explain in detail how the 'final 5' features were selected. Was it a simple ranking, a weighted combination, or a sequential process?
Response:
We appreciate the reviewer’s interest in the feature selection methodology. The selection of the top 5 most influential features was based on a multi-stage process combining statistical relevance, model-based importance, and domain interpretability. Specifically, we followed these steps (an illustrative code sketch follows the list):
Initial Feature Set (83 Features):
After removing redundant and weakly correlated features from the original 120, a refined set of 83 was finalized based on mutual information, variance thresholding, and domain knowledge.
Random Forest Feature Importance:
A Random Forest classifier was trained on the imbalanced training dataset. The mean decrease in Gini impurity was used to compute the importance of each feature, providing a ranked list based on predictive power.
Top-10 Filtering:
From this ranking, the top 10 features were shortlisted.
Sequential Selection for Final 5:
A sequential forward feature selection approach was applied using the Bi-LSTM+RF hybrid model to evaluate combinations of the top 10 features. The best performing 5-feature subset (based on F1-score and recall) was selected as the “final 5.”
Interpretability Check:
To ensure practical relevance, we also verified that each selected feature had clear interpretability in fraud risk assessment (e.g., suspicious claim-to-premium ratio, high hospital bill variance, abnormal claim timing).
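The sketch below illustrates steps 2-4 in scikit-learn. For brevity it uses a plain Random Forest as the evaluator in the forward-selection stage, whereas our study used the Bi-LSTM+RF hybrid, so it should be read as a simplified approximation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

# Steps 2-3: rank the 83 features by mean decrease in Gini impurity, keep top 10
# (X_train, y_train are assumed to be NumPy arrays from the train/test split)
rf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
top10_idx = np.argsort(rf.feature_importances_)[::-1][:10]

# Step 4: sequential forward selection of the best 5-feature subset,
# scored here by F1 (the study evaluated both F1-score and recall)
sfs = SequentialFeatureSelector(
    RandomForestClassifier(n_estimators=200, random_state=42),
    n_features_to_select=5, direction="forward", scoring="f1", cv=5,
)
sfs.fit(X_train[:, top10_idx], y_train)
final5_idx = top10_idx[sfs.get_support()]
```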
The authors talk about the methods used, but important details are missing. It is imperative that the number of layers, neurons per layer, activation functions, optimizers, learning rates, epochs, batch sizes, etc. be specified for the models.
We appreciate the reviewer’s emphasis on the need for comprehensive model configuration details. In response, we have now included a new section on Model Configuration and Training Setup, where we describe the architectural and training parameters used for each model: CVAE, Bi-LSTM, and the proposed hybrid RF + Bi-LSTM.
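As an indication of the level of detail now reported, a hypothetical Bi-LSTM configuration in Keras is sketched below; every layer size, rate, and training setting here is an assumption for illustration, not the exact configuration given in the manuscript:

```python
import tensorflow as tf
from tensorflow.keras import layers

timesteps, n_features = 12, 5  # assumed sequence shape, not the paper's values

model = tf.keras.Sequential([
    tf.keras.Input(shape=(timesteps, n_features)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Dropout(0.3),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # per-claim fraud probability
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.Recall(name="recall")],
)
# model.fit(X_seq_train, y_train_bal, epochs=50, batch_size=64, validation_split=0.1)
```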
The abstract and results section consistently report very low recall values for all models (CVAE: 3.28%, Bi-LSTM: 5.98%, Hybrid: 5.98%). These recall rates are not acceptable for a real-world solution. While the authors state that the CVAE "failed to detect many fraudulent cases" and that the Bi-LSTM "missed 110 fraud cases," the magnitude of this failure is not discussed in terms of its practical implications.
Response:
We thank the reviewer for this important observation. In response, we have now provided a detailed comparison of recall values across existing models and the proposed methods in Section 4.7. This comparative analysis includes performance metrics such as accuracy, recall, precision, and F1-score, with specific emphasis on the observed recall limitations.
Furthermore, we have explicitly discussed the real-world implications of low recall at the end of Section 4.7. This includes a reflection on the practical risks of missing fraudulent claims in operational insurance systems, such as potential financial losses, loss of trust, and regulatory issues. We have also clarified that although the hybrid RF + Bi-LSTM model provides a balanced F1-score and interpretability, the low recall remains a critical limitation.
The conclusion mentions "integration of reinforcement learning and attention-based mechanisms" as future research directions. While these are relevant areas, the connection to the current paper, especially the low recall, is very weak.
Response:
We appreciate the reviewer’s observation regarding the weak linkage between the proposed future directions and the core issue of low recall in our study. To address this, we have removed the reference to "reinforcement learning and attention-based mechanisms" from the conclusion section.
Instead, the revised conclusion now focuses more directly on practical extensions and improvements that are strongly aligned with the limitations of this study, particularly the issue of low recall in highly imbalanced fraud detection scenarios. The revised text emphasizes future exploration of cost-sensitive learning techniques (e.g., focal loss), advanced resampling strategies (e.g., SMOTE variants), and explainable AI frameworks to enhance both detection performance and interpretability.
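As a pointer to the cost-sensitive direction mentioned above, a minimal binary focal loss (Lin et al., 2017) can be sketched as follows; the alpha and gamma values are the commonly used defaults, not values tuned for this study:

```python
import tensorflow as tf

def binary_focal_loss(alpha=0.25, gamma=2.0):
    """Down-weights easy examples so training focuses on hard (rare) fraud cases."""
    def loss(y_true, y_pred):
        y_true = tf.cast(y_true, y_pred.dtype)
        eps = tf.keras.backend.epsilon()
        y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
        p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)
        alpha_t = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)
        return -tf.reduce_mean(alpha_t * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t))
    return loss

# Usage: model.compile(optimizer="adam", loss=binary_focal_loss(), ...)
```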
Author Response File: Author Response.pdf
Reviewer 4 Report
Comments and Suggestions for Authors
Thank you for providing me with the opportunity to review your manuscript, "Life Insurance Fraud Detection: A Data-Driven Approach Utilizing Ensemble Learning, CVAE, and Bi-LSTM." The topic of insurance fraud detection is highly relevant, and your investigation into different modeling strategies is valuable. While the paper presents interesting work, I have identified several areas that, if addressed, could significantly enhance its clarity, rigor, and impact.
My primary concerns are detailed below, focusing on methodological justifications, clarity of presentation, and the depth of analysis regarding practical implications.
1. Clarity and Professionalism (Minor Inconsistencies/Typos)
The manuscript contains several minor inconsistencies and typographical errors that detract from its overall professionalism. For instance:
The term "FI score" is used in some instances while "F1-score" is used predominantly elsewhere. Consistency is important.
The phrase "Precision Recall covers" appears to be a typo and should likely be "Precision-Recall curves".
The abstract mentions "curated datasets of 4000 life insurance applications" while the introduction refers to an "unbalanced life insurance fraud dataset". While both might be true, consistent phrasing throughout would be beneficial.
In the description of the ROC curve, "normal gaining" is used instead of the standard "random guessing." Addressing these and other similar small errors would improve the readability and academic rigor of the paper.
2. Lack of Novelty in Problem Definition
The introduction effectively highlights the significant and well-known challenges of insurance fraud detection, including increasing fraudulent claims, class imbalance, and the complexity of fraudulent behavior. While these are indeed critical issues, the paper could strengthen its argument for novelty by articulating a more specific and unique gap that this study fills beyond broadly re-stating known problems. Many studies acknowledge these difficulties; a clearer explanation of how this specific combination of CVAE, Bi-LSTM, and the hybrid model distinctively addresses a previously un(der)explored facet of life insurance fraud detection would be beneficial.
3. Limited Feature Engineering/Selection Justification
The methodology section indicates that an initial pool of 120 features was generated, and then 83 features were "finalized based on predictive importance, domain relevance and data source availability" after "feature selection and correlation analysis". While the five most predictive features are listed (Claim Amount, Credit Score, Number of Prior Claims, Policy Type, Debt-to-income ratio), the process of feature engineering itself and a more detailed justification for why the other 37 features were excluded beyond "correlation analysis" could be more robust. Providing insights into the criteria, thresholds, or specific methodologies (e.g., recursive feature elimination, permutation importance on a baseline model) used to narrow down the feature set would enhance the credibility of the data preparation stage.
4. Limited Ensemble Strategy for Hybrid Model Elaboration
The proposed hybrid RF + Bi-LSTM model is an interesting approach that combines the strengths of deep learning and ensemble methods. The core idea of concatenating Bi-LSTM's predicted probabilities with original features as input for the Random Forest is a valid strategy. However, Table 6, outlining the proposed architecture, mentions a "Fusion layer: Concentration on the majority of voting layer (MLP optimal)", which is not clearly elaborated within the "Step-by-step Algorithms" for the hybrid model. This creates an ambiguity regarding the exact nature of the "fusion layer" and its contribution. Furthermore, given the goal of potentially improving results beyond marginal gains, exploring more advanced ensemble strategies, such as weighted averaging of predictions, or stacking with a more complex meta-learner than just passing probabilities to an RF, could be areas for future consideration and would demonstrate a more thorough investigation of hybrid model potential.
5. Limited Discussion on False Positives and Business Impact
The paper diligently reports False Positive Rates (FPR) for all models (CVAE: 1.77%, Bi-LSTM: 6.15%, Hybrid: 6.88%). However, the discussion primarily focuses on recall for fraud detection as the critical metric. In real-world insurance scenarios, a high false positive rate, even if recall is improved, can lead to significant operational costs (e.g., investigating legitimate claims unnecessarily) and negative customer experiences. The discussion would greatly benefit from a more explicit analysis of the trade-off between recall and false positives from a business impact perspective. For instance, quantifying the potential costs associated with different FPRs or discussing how the models' performance might translate into actionable insights for insurance companies (e.g., prioritizing investigations, adjusting thresholds based on risk tolerance) would enhance the practical relevance of the findings.
I hope these comments are helpful in refining your manuscript. I look forward to reviewing the revised version.
Author Response
- Clarity and Professionalism (Minor Inconsistencies/Typos)
The manuscript contains several minor inconsistencies and typographical errors that detract from its overall professionalism. For instance:
The term "FI score" is used in some instances while "F1-score" is used predominantly elsewhere. Consistency is important.
The phrase "Precision Recall covers" appears to be a typo and should likely be "Precision-Recall curves".
Response:
We thank the reviewer for pointing out these important clarity and formatting issues. We have now conducted a thorough proofreading of the manuscript to ensure consistency and professional language throughout. Specifically:
- All instances of "FI score" have been corrected to "F1-score" for terminological consistency.
- The typo "Precision Recall covers" has been corrected to "Precision-Recall curves" in the appropriate section.
- Additional minor grammatical and typographical corrections have been made to improve the overall readability and presentation of the manuscript.
The abstract mentions "curated datasets of 4000 life insurance applications" while the introduction refers to an "unbalanced life insurance fraud dataset". While both might be true, consistent phrasing throughout would be beneficial.
We thank the reviewer for highlighting this inconsistency in dataset terminology. To maintain uniformity and avoid confusion, we have removed the phrase "unbalanced life insurance fraud dataset" from the introduction and ensured that the dataset is consistently referred to throughout the manuscript as a “curated dataset of 4000 life insurance applications.”
In the description of the ROC curve, "normal gaining" is used instead of the standard "random guessing." Addressing these and other similar small errors would improve the readability and academic rigor of the paper.
Response:
We appreciate the reviewer’s observation regarding terminology and clarity. We have corrected the term "normal gaining" to the standard and accurate phrase "random guessing" in the ROC curve description.
Additionally, a thorough proofreading has been conducted across the manuscript to identify and correct other minor linguistic errors and technical inconsistencies. These refinements enhance the overall readability, academic precision, and presentation quality of the manuscript.
- Lack of Novelty in Problem Definition
The introduction effectively highlights the significant and well-known challenges of insurance fraud detection, including increasing fraudulent claims, class imbalance, and the complexity of fraudulent behavior. While these are indeed critical issues, the paper could strengthen its argument for novelty by articulating a more specific and unique gap that this study fills beyond broadly re-stating known problems. Many studies acknowledge these difficulties; a clearer explanation of how this specific combination of CVAE, Bi-LSTM, and the hybrid model distinctively addresses a previously un(der)explored facet of life insurance fraud detection would be beneficial.
We thank the reviewer for raising this important point. In the revised manuscript, we have taken the following steps to more clearly articulate the novelty and specific research gap addressed by our study:
Unique Gap Articulated in the Literature Review (Section 2):
The revised literature survey explicitly compares our work with existing models (e.g., Random Forest, XGBoost, LSTM, VAE, CNN-LSTM) and identifies key limitations, including low recall, lack of interpretability, and the absence of latent perturbation modeling in insurance fraud contexts. This comparative table and corresponding discussion define the precise gap that our hybrid approach aims to fill.
Problem Statement Strengthened in Introduction:
The introduction now clearly presents the specific challenge of balancing temporal modeling (via Bi-LSTM), latent anomaly detection (via CVAE), and interpretability (via RF) — a combination rarely explored in life insurance fraud detection literature.
Justification for Model Combination Provided:
Both the literature review and introduction explain the rationale behind combining CVAE and Bi-LSTM with a Random Forest ensemble, demonstrating how this approach is tailored to address previously underexplored dimensions, including transparent scoring, rare-event detection, and class imbalance robustness.
- Limited Feature Engineering/Selection Justification
The methodology section indicates that an initial pool of 120 features was generated, and then 83 features were "finalized based on predictive importance, domain relevance and data source availability" after "feature selection and correlation analysis". While the five most predictive features are listed (Claim Amount, Credit Score, Number of Prior Claims, Policy Type, Debt-to-income ratio), the process of feature engineering itself and a more detailed justification for why the other 37 features were excluded beyond "correlation analysis" could be more robust. Providing insights into the criteria, thresholds, or specific methodologies (e.g., recursive feature elimination, permutation importance on a baseline model) used to narrow down the feature set would enhance the credibility of the data preparation stage.
Response:
We thank the reviewer for this important suggestion. In the revised manuscript, we have added a detailed explanation of the feature selection process under the newly introduced Section 3.3 (Feature Selection and Dimensionality Reduction). This section now includes:
Use of the Chi-square test for selecting relevant categorical features.
Use of Mutual Information to evaluate the predictive strength of numerical features.
Use of Random Forest-based feature importance to rank and finalize the most significant variables.
We have also included the mathematical definitions of each method and provided a table listing the top five most predictive features. A full list of all 83 selected features, along with their descriptions, is now available in Supplementary File 2. An illustrative sketch of these scoring methods follows.
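For illustration, the scoring calls for the first two methods look like this in scikit-learn; variable names are illustrative, and categorical features must be non-negatively encoded (e.g., one-hot) for the chi-square test:

```python
from sklearn.feature_selection import chi2, mutual_info_classif

# Chi-square scores and p-values for (non-negative) categorical features
chi2_scores, p_values = chi2(X_cat_train, y_train)

# Mutual information scores for numerical features
mi_scores = mutual_info_classif(X_num_train, y_train, random_state=42)

# Low-MI features and features with large chi-square p-values would be
# candidates for removal (the specific cutoffs are an assumption here)
```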
- Limited Ensemble Strategy for Hybrid Model Elaboration
The proposed hybrid RF + Bi-LSTM model is an interesting approach that combines the strengths of deep learning and ensemble methods. The core idea of concatenating Bi-LSTM's predicted probabilities with original features as input for the Random Forest is a valid strategy. However, Table 6, outlining the proposed architecture, mentions a "Fusion layer: Concentration on the majority of voting layer (MLP optimal)", which is not clearly elaborated within the "Step-by-step Algorithms" for the hybrid model. This creates an ambiguity regarding the exact nature of the "fusion layer" and its contribution. Furthermore, given the goal of potentially improving results beyond marginal gains, exploring more advanced ensemble strategies, such as weighted averaging of predictions, or stacking with a more complex meta-learner than just passing probabilities to an RF, could be areas for future consideration and would demonstrate a more thorough investigation of hybrid model potential.
Response:
We thank the reviewer for this valuable and constructive suggestion. The following revisions have been made to improve the clarity and depth of our hybrid model presentation:
Step-by-step Hybrid Architecture Explained in Section 2.8.2:
We have clearly described the hybrid RF + Bi-LSTM workflow in Section 2.8.2. This includes the fusion mechanism, where predicted fraud probabilities from the Bi-LSTM model are concatenated with the original feature set to form an enriched input vector, which is then passed to the Random Forest classifier. This step ensures that both temporal features and static policyholder data contribute to final classification.
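A minimal sketch of this fusion step follows; bilstm_model denotes a trained Bi-LSTM such as the one sketched earlier, and all names are illustrative rather than the study's exact code:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Step 1: per-claim fraud probability from the trained Bi-LSTM
p_train = bilstm_model.predict(X_seq_train).ravel()
p_test = bilstm_model.predict(X_seq_test).ravel()

# Step 2: enriched input = [static policyholder features | Bi-LSTM probability]
X_train_fused = np.hstack([X_train, p_train.reshape(-1, 1)])
X_test_fused = np.hstack([X_test, p_test.reshape(-1, 1)])

# Step 3: the Random Forest makes the final, interpretable classification
rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(X_train_fused, y_train)
y_pred = rf.predict(X_test_fused)
```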
Clarification on the Fusion Layer in Table 6:
We have revised the description in Table 6 to remove any ambiguity related to “majority voting.” The table now clearly refers to a fusion of output probabilities and features, not a voting mechanism. This aligns with the actual structure used in the hybrid model.
- Limited Discussion on False Positives and Business Impact
The paper diligently reports False Positive Rates (FPR) for all models (CVAE: 1.77%, Bi-LSTM: 6.15%, Hybrid: 6.88%). However, the discussion primarily focuses on recall for fraud detection as the critical metric. In real-world insurance scenarios, a high false positive rate, even if recall is improved, can lead to significant operational costs (e.g., investigating legitimate claims unnecessarily) and negative customer experiences. The discussion would greatly benefit from a more explicit analysis of the trade-off between recall and false positives from a business impact perspective. For instance, quantifying the potential costs associated with different FPRs or discussing how the models' performance might translate into actionable insights for insurance companies (e.g., prioritizing investigations, adjusting thresholds based on risk tolerance) would enhance the practical relevance of the findings.
Response:
We thank the reviewer for highlighting this important point. In the revised manuscript, we have added a new subsection (Section 4.7.2) titled “Business Impact of Recall–False Positive Trade-off.” This section provides a detailed analysis of how high false positive rates can lead to increased operational costs and negative customer experiences in real-world insurance settings.
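To make the trade-off concrete, a decision threshold can be tuned against an assumed cost model, as in the sketch below; the per-case costs are placeholders for exposition, not figures from the manuscript:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Assumed per-case business costs; real values would come from claims-handling data
COST_FN, COST_FP = 10_000, 150  # missed fraud vs. unnecessary investigation

def expected_cost(y_true, scores, threshold):
    preds = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, preds, labels=[0, 1]).ravel()
    return fn * COST_FN + fp * COST_FP

# rf and X_test_fused carry over from the hybrid-model sketch above
scores = rf.predict_proba(X_test_fused)[:, 1]
thresholds = np.linspace(0.05, 0.95, 19)
best_t = min(thresholds, key=lambda t: expected_cost(y_test, scores, t))
```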
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
I think the authors addressed all my concerns.
Author Response
We sincerely thank you for your time and valuable feedback. We are pleased to know that all concerns have been addressed to your satisfaction. We appreciate your support and constructive guidance throughout the review process, which helped us significantly improve the quality of the manuscript.
Reviewer 4 Report
Comments and Suggestions for Authors
The authors have adequately addressed all of my previous comments. The revisions significantly improved the clarity and quality of the manuscript. I have no further concerns.
Author Response
The authors have adequately addressed all of my previous comments. The revisions significantly improved the clarity and quality of the manuscript. I have no further concerns.
Author Response:
We sincerely thank you for your positive feedback and for acknowledging the improvements made to the manuscript. We truly appreciate your careful review and constructive comments, which have helped enhance the clarity and overall quality of our work.
Thank you once again for your time and support throughout the review process.