Pre-Symptomatic Detection of Nicosulfuron Phytotoxicity in Vegetable Soybeans via Hyperspectral Imaging and ResNet-18
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
Title: clear if it is hyperspectral imagery; if the study isn't just about deep learning, make it clear in the title.
Keywords: avoid using the same from title.
line 25: why that period was chosen?
line 29: specify a number/metric instead of using > (review for all text).
line 40: how the study should be applied in precision ag? since the methodology was developed under controlled conditions and in small scale?
Introduction: there is lack of the state-art related to the herbicide stress and novel technologies applied to detect it; Also make clear the real contribution of your study in relation to the studies available in literature.
line 74: use "productivity" or "yield" over the text (review for all text).
line 133: artificial neural network or neural network?
line 144: what is it "discriminative power"?
line 146: adapt your objective, no decision support system was developed or demonstrated in Results.
line 151: why those parameters were chosen for study?
line 164: is there any specification for "clear water"?
Table 1: move it after explanation.
line 181: are the images from the same period after application (1, 3, 5, 7 days)?
line 184: why those settings? e.g., why 20 cm above the stage? is there any study confirming that this is the ideal distance? how was the product applied, uniformly over the leaf? at large scale, how could such imagery be taken without interference from the ground/soil?
line 217: "performance" of what?
line 232: show PCA outcomes on Results.
2.4: methods of what?
Models: what is the amount of dataset (number of samples)? was there any ground-truth or manual annotation to compare the outputs from modeling? how do you ensure there is no overfitting?
line 296: specify the number of samples for that ratio.
line 303: use past sentence (review for all text).
Table 2: was there any minimal or maximum rate/concentration of herbicide to be detected from the imagery?
line 329-330: explain about those methods in methodology section.
Figure 2: increase the size of the numbers.
line 417: which algorithms?
Figure 5: the methodology specified 386-1004 nm, which isn't the same range shown on the x-axis.
Figure 6: explain the reason to use "first-order". Also, what concentration is considered "stressed"?
line 491: specify the range of bands.
Discussion: improve it according to your findings, most part of the text isn't related to the results of the paper.
Conclusion: focus to answer the objective of the paper.
Author Response
Title: clear if it is hyperspectral imagery; if the study isn't just about deep learning, make it clear in the title.
Response: Thank you for your revision suggestions. We have changed the paper title to "Pre-Symptomatic Detection of Nicosulfuron Phytotoxicity in Vegetable Soybeans via Hyperspectral Imaging and ResNet-18". As shown in Lines 1-2 of the revised manuscript.
Keywords: avoid using the same from title.
Response: Thank you for your revision suggestions. We have changed the keywords to "Spectral range; Herbicide phytotoxicity; Early stress detection; Deep learning; Soybean-corn intercropping". As shown in Lines 45-46 of the revised manuscript.
line 25: why that period was chosen?
Response: Thanks for your kind reminders. During the early stages of herbicide treatment, soybean plants exhibited subtle phenotypic manifestations of herbicide phytotoxicity. As herbicide toxicity progressively intensified over time, visually discernible phytotoxic symptoms became apparent on leaves by Day 7 post-treatment. Consequently, we implemented extended sampling intervals to maximize phenotypic differentiation, thereby enabling robust comparative analysis of classification model accuracy. Simultaneously, to evaluate our proposed model's efficacy in early-stage phytotoxicity identification, we conducted sampling surveys at 1, 3, 5, and 7 days post-herbicide application. We provide the rationale for selecting these time points in Lines 194-200 of the revised manuscript.
line 29: specify a number/metric instead of using > (review for all text).
Response: Thanks for your kind reminders. We have checked the full text and found that the issue you mentioned exists in the abstract. We have implemented these modifications in the revised manuscript.
line 40: how the study should be applied in precision ag? since the methodology was developed under controlled conditions and in small scale?
Response: Thanks for your kind reminders. Controlled conditions provided necessary phenotyping precision for algorithm training. We have provided an explanation for the question you raised in lines 608-615 of the discussion section. Future work will focus on environmental robustness through spectral augmentation techniques simulating field variability (e.g., lighting fluctuations, leaf angle variations). We have elaborated on the study's limitations and potential applicability in the Discussion section (Lines 639-646) of the revised manuscript.
Introduction: there is lack of the state-art related to the herbicide stress and novel technologies applied to detect it; Also make clear the real contribution of your study in relation to the studies available in literature.
Response: Thanks for your kind reminders. We have added recent techniques for detecting herbicide stress and clarified the contribution of our research in lines 147-158 of the revised manuscript. Specifically, we added descriptions of novel techniques related to the detection of herbicide stress and an explicit statement of the research contribution.
line 74: use "productivity" or "yield" over the text (review for all text).
Response: Thanks for your kind reminders. We have checked the full text and replaced "productivity" with "yield" in lines 41, 79 and 255.
line 133: artificial neural network or neural network?
Response: Thanks for your kind reminders. We have changed "artificial neural network" to "neural network" in line 134.
line 144: what is it "discriminative power"?
Response: Thank you for your revision suggestions. "Discriminative power" refers to the ability of a model, method, or technique to distinguish between different categories or states, i.e., the ability to accurately identify and distinguish between different samples or features. In lines 161-162 we explained what exactly "discriminative power" means in the article.
line 146: adapt your objective, no decision support system was developed or demonstrated in Results.
Response: Thank you for your valuable feedback. It is true that our current research has not yet encompassed the development of a fully mature decision support system. Our primary objective is to leverage the integration of hyperspectral imaging and deep learning methodologies to establish a more precise early symptom diagnostic framework, as opposed to constructing the system itself. Consequently, all ambiguous references to system development have been removed from the original manuscript.
line 151: why those parameters were chosen for study?
Response: Thank you for your correction. The parameters of the artificial climate chambers were set to simulate environmental conditions suitable for the growth of vegetable soybean. To make this easier for the reader to understand, we have revised the text and added a reference in lines 170-172.
line 164: is there any specification for "clear water"?
Response: Thanks for your kind reminders. In the experiment, "clear water" refers to ordinary tap water without herbicides or other chemical additives, sourced from the municipal water supply and used as the control. It was free of visible impurities, with a natural pH range of 6.5–7.5, and was applied fresh (within 24 hours) to maintain consistency with herbicide treatment groups in irrigation volume and timing. We made modifications in lines 184-185 to explain this issue.
Table 1: move it after explanation.
Response: Thanks for your kind reminders. We have relocated Table 1 to lines 200-204.
line 181: are the images from the same period after application (1, 3, 5, 7 days)?
Response: Thank you for making this important point. Hyperspectral images were acquired at the same point in the light cycle: all collections at 1, 3, 5 and 7 days after treatment were carried out at 10:00 a.m. and completed within 1 h, ensuring uniform light conditions and minimizing diurnal variations in plant physiology and light intensity. We have added this in lines 211-217 of the article.
line 184: why those settings? e.g., why 20 cm above the stage? is there any study confirming that this is the ideal distance? how was the product applied, uniformly over the leaf? at large scale, how could such imagery be taken without interference from the ground/soil?
Response: Thanks for your kind reminders. We adopted the standard instrument parameters recommended for the hyperspectral imager equipped in our laboratory. Hyperspectral data acquired under these parameters have been successfully applied in non-destructive detection of tomato soluble solids content and freshness assessment of vegetable soybeans. We have inserted relevant references in line 225 of the manuscript and provided explanations for selecting these specific parameters in Lines 222-223.
line 217: "performance" of what?
Response: Thank you for the feedback. The term "performance" in lines 256-257 has been clarified to specify the classification performance of the machine learning models (ResNet-18 and random forest). The revision now explicitly links the preprocessing step (min-max normalization) to its impact on model metrics.
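For illustration, a minimal sketch of per-sample min-max normalization of reflectance spectra as referenced above; the array shapes and data are placeholder assumptions, not the authors' code:

```python
import numpy as np

def min_max_normalize(spectra: np.ndarray) -> np.ndarray:
    """Scale each spectrum (row) to the [0, 1] range before modeling."""
    mins = spectra.min(axis=1, keepdims=True)
    maxs = spectra.max(axis=1, keepdims=True)
    ranges = np.where(maxs - mins == 0, 1.0, maxs - mins)  # guard against flat spectra
    return (spectra - mins) / ranges

# Placeholder: 916 samples x 135 bands of synthetic reflectance values.
raw = np.random.default_rng(0).uniform(0.05, 0.6, size=(916, 135))
normalized = min_max_normalize(raw)
assert 0.0 <= normalized.min() and normalized.max() <= 1.0
```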
line 232: show PCA outcomes on Results.
Response: Thanks for your suggestions. We sincerely regret the absence of intuitive comparative visualizations. Due to the perishable nature of the experimental materials, supplementary data cannot be readily obtained at this stage. Following the application of random forest to screen 64 significant spectral bands, minor collinearity redundancy persisted (e.g., overlapping spectral features in adjacent bands). To address this, we applied Principal Component Analysis (PCA), mapping the 64 bands onto 32 uncorrelated principal component dimensions via orthogonal transformation. This process effectively eliminated redundancy while ensuring the retained principal components exclusively contain mutually independent and complementary spectral information. The corresponding revisions have been implemented in lines 274-279 of the manuscript.
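As a hedged sketch of the two-step pipeline described above (random-forest band screening followed by PCA), assuming scikit-learn: only the band counts (64 selected bands, 32 components) come from the response, while the data, labels, and RF settings are placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, size=(916, 135))  # placeholder normalized spectra
y = rng.integers(0, 4, size=916)            # placeholder 4-class concentration labels

# Step 1: rank spectral bands by random-forest importance and keep the top 64.
rf = RandomForestClassifier(n_estimators=500, random_state=42).fit(X, y)
top_bands = np.argsort(rf.feature_importances_)[::-1][:64]

# Step 2: orthogonally map the 64 selected bands onto 32 uncorrelated principal components.
pca = PCA(n_components=32)
X_pca = pca.fit_transform(X[:, top_bands])
print(X_pca.shape, pca.explained_variance_ratio_.sum())  # (916, 32) and retained variance
```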
2.4: methods of what?
Response: Thanks for your suggestion. We have revised the section heading of 2.4 to "Modeling Methods of Machine Learning and Deep Learning" (line 287).
Models: what is the amount of dataset (number of samples)? was there any ground-truth or manual annotation to compare the outputs from modeling? how do you ensure there is no overfitting?
Response: Thanks for your comments.
(1) Samples from the herbicide treatment groups (0 mL/L, 0.5 mL/L, 1 mL/L, and 2 mL/L) were collected at a 1:1:1:1 ratio across four timepoints: day 1 (238 samples), day 3 (242), day 5 (217), and day 7 (219), yielding a total of 916 samples. Ground-truth labels were manually assigned to all samples according to their respective concentrations and collection days. This explanatory note has been incorporated into lines 367-369.
(2) Each collected sample underwent manual ground-truth labeling. These annotated data constitute the essential basis for model training and evaluation, as explicitly noted in lines 335-336. Without such verified labels, critical performance metrics including precision, recall, and F1-score would be unattainable. During ResNet-18 and random forest training, the labeled data served as the definitive benchmark, providing unambiguous learning targets. Through continuous comparison between predictions and ground-truth labels, models compute loss functions (e.g., categorical cross-entropy) and backpropagate gradients to optimize parameters, thereby learning to extract discriminative features from hyperspectral data. All evaluation metrics fundamentally rely on this ground-truth reference. This clarification has been incorporated in lines 369-371.
(3) To mitigate overfitting, a phased training protocol with dynamically optimized procedures was implemented. During the initial phase, pretrained ResNet-18 weights were frozen while exclusively updating parameters in newly added layers to prevent catastrophic forgetting. Subsequently, full network fine-tuning was performed using stochastic gradient descent (SGD) with dynamic learning rate decay and early stopping mechanisms. The learning rate was reduced when validation loss plateaued for 20 consecutive epochs, with training termination triggered after 45 epochs without improvement. Categorical cross-entropy served as the loss function, with classification accuracy being the primary evaluation metric. This strategy substantially enhanced model generalization while effectively controlling overfitting. The detailed methodology has been documented in lines 311-319.
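A minimal PyTorch sketch of the staged protocol described above (frozen backbone, then fine-tuning with learning-rate decay on plateau and early stopping); apart from the 20/45-epoch patience thresholds and the categorical cross-entropy loss, the hyperparameter values and placeholder validation data are illustrative assumptions rather than the authors' implementation:

```python
import torch
import torch.nn as nn
from torch.optim import SGD
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torchvision.models import resnet18

# Build ResNet-18 with a new 4-class head (one class per herbicide concentration).
# Pretrained ImageNet weights would be loaded in practice; weights=None keeps the sketch offline.
model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 4)

# Phase 1: freeze the pretrained backbone and update only the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")

criterion = nn.CrossEntropyLoss()  # categorical cross-entropy
optimizer = SGD([p for p in model.parameters() if p.requires_grad], lr=1e-2, momentum=0.9)

# Phase 2 would unfreeze all layers and rebuild the optimizer; the loop below shows
# the plateau-based learning-rate decay and early-stopping logic.
scheduler = ReduceLROnPlateau(optimizer, mode="min", factor=0.1, patience=20)
val_x, val_y = torch.randn(8, 3, 64, 64), torch.randint(0, 4, (8,))  # placeholder data

best_loss, stale_epochs = float("inf"), 0
for epoch in range(500):
    # ... one training epoch would run here ...
    model.eval()
    with torch.no_grad():
        val_loss = criterion(model(val_x), val_y).item()
    scheduler.step(val_loss)          # reduce LR after 20 epochs without improvement
    if val_loss < best_loss - 1e-6:
        best_loss, stale_epochs = val_loss, 0
    else:
        stale_epochs += 1
        if stale_epochs >= 45:        # stop after 45 epochs without improvement
            break
```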
line 296: specify the number of samples for that ratio
Response: Thanks for your kind reminder. We have added extra information in lines 371-372. The total number of samples is 916, and the training set/test set ratio is 7:3: there are 642 samples in the training set and 274 in the test set.
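For concreteness, a small scikit-learn sketch reproducing a 642/274 split of 916 samples on placeholder data; whether the authors stratified the split is not stated, so the stratify argument is an illustrative choice:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.default_rng(3).uniform(0.0, 1.0, size=(916, 135))  # placeholder spectra
y = np.repeat(np.arange(4), 229)                                  # placeholder labels (4 x 229 = 916)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=274, stratify=y, random_state=0)  # 7:3 split -> 642 train / 274 test
print(len(X_train), len(X_test))
```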
line 303: use past sentence (review for all text).
Response: Thank you for the suggestion. We have carefully reviewed the entire manuscript and revised all relevant sections to ensure consistent use of the past tense, particularly in describing experimental procedures, data analysis, and results.
Table 2: was there any minimal or maximum rate/concentration of herbicide to be detected from the imagery?
Response: Thank you for the inquiry. Table 2 (Experiment 1) focuses on binary classification performance (i.e., distinguishing herbicide-treated vs. control samples) rather than detecting minimal/maximal herbicide concentrations. The experimental design specifically aimed to validate the model’s ability to identify the presence/absence of herbicide in seedlings, using a dichotomous labeling scheme (herbicide group vs. clear water group). As shown in Table 2, the model achieved [X% accuracy/R²] in this binary task, demonstrating its effectiveness for initial herbicide detection.
line 329-330: explain about those methods in methodology section.
Response: Thank you for your correction. We have revised the presentation of both the Jaccard Similarity Index and Confusion Density within this section, with modifications now incorporated in lines 320-336. These modifications enhance methodological rigor and improve the overall comprehensibility of our research framework. We sincerely appreciate your valuable insights on this matter.
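For readers unfamiliar with the Jaccard Similarity Index in a classification setting, a short scikit-learn sketch of the per-class computation on placeholder labels; the Confusion Density presentation is the authors' own and is not reproduced here:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, jaccard_score

y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])  # placeholder ground-truth classes
y_pred = np.array([0, 1, 1, 1, 2, 3, 3, 3])  # placeholder predictions

# Per-class Jaccard index: |intersection| / |union| of true and predicted members.
print(jaccard_score(y_true, y_pred, average=None))

# Row-normalized confusion matrix, a common way to see where classes are confused.
cm = confusion_matrix(y_true, y_pred)
print(cm / cm.sum(axis=1, keepdims=True))
```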
Figure 2: increase the size of the numbers.
Response: Thank you for your meticulous suggestion. We have enhanced the visual clarity of Figure 2 through dual modifications: (1) enlarging numerical fonts and (2) implementing context-aware color optimization—utilizing white numerals against dark backgrounds and black numerals against light backgrounds. This color-contrast strategy significantly improves data legibility. The revision ensures all critical figure elements (including but not limited to axis scales and legend values) are distinctly discernible, thereby optimizing the chart's data interpretability. In response to your valuable feedback on Figure 2, we have applied the same enhancement protocol to Figures 3 and 4.
line 417: which algorithms?
Response: Thank you for your careful review. In line 421, the algorithms referred to are explicitly stated in the figure caption and corresponding text: Subfigures (a)–(d) utilize the Random Forest (RF) algorithm, while Subfigures (e)–(h) employ the ResNet-18 deep learning model. This distinction is further clarified in Section 3.1.1 (Experiment 1) and the associated confusion matrix descriptions, where the performance of RF and ResNet-18 is analyzed separately across different days. Please let us know if further clarification is needed.
Figure 5: the methodology specified 386-1004 nm, which isn't the same range shown on the x-axis.
Response: Thank you for your careful observation! To clarify: the x-axis in Figure 5 displays the visually prominent spectral range (400–1000 nm) for readability, as the key spectral variations occur within this interval. However, the actual hyperspectral data were acquired over the full range of 386–1004 nm (consistent with Section 2.2 “Data Acquisition” in the methodology). The slight extension beyond 400 nm and 1000 nm on the axis is a default rendering by the plotting tool (no data exists outside 386–1004 nm) and does not affect the reported spectral range.
Figure 6: explain the reason to use "first-order". Also, what concentration is considered "stressed"?
Response: Thank you for your questions on Figure 6. The first-order derivative was applied because it calculates the reflectance change rate (slope) between adjacent wavelengths, effectively eliminating baseline drift, light scattering, and background noise in raw spectra while highlighting subtle spectral feature variations. We explain this in detail in lines 563-568.
For “stressed” concentrations, based on our experimental design with four gradients (0 as control, 0.5, 1, 2 mL/L) for nicosulfuron-induced physiological damage in vegetable soybean (Glycine max L.) seedlings: concentrations ≥ 0.5 mL/L are defined as “stressed” (0.5–1 mL/L as “low-stress” and 2 mL/L as “high-stress” in three-class tasks, with all non-zero concentrations belonging to stressed categories). We have clarified these in the figure caption and relevant sections for better transparency.
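To make the derivative step concrete, a minimal sketch of computing first-order derivative spectra; a Savitzky-Golay variant is shown alongside a plain finite difference, and the exact method and band grid used in the manuscript may differ:

```python
import numpy as np
from scipy.signal import savgol_filter

wavelengths = np.linspace(513, 690, 135)  # placeholder grid for the 135 selected bands
reflectance = np.random.default_rng(1).uniform(0.05, 0.6, size=(10, 135))  # placeholder spectra

# Plain finite-difference first derivative: d(reflectance)/d(wavelength).
d1_simple = np.gradient(reflectance, wavelengths, axis=1)

# Savitzky-Golay first derivative: smooths while differentiating, suppressing
# baseline drift and high-frequency noise before the slope is taken.
step = wavelengths[1] - wavelengths[0]
d1_savgol = savgol_filter(reflectance, window_length=7, polyorder=2,
                          deriv=1, delta=step, axis=1)
print(d1_simple.shape, d1_savgol.shape)  # (10, 135) (10, 135)
```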
line 491: specify the range of bands.
Response: Thank you for your question. The range of wavelengths is explained at the beginning of Section 3.2 (Spectrum analysis): this study selected the 513 nm–690 nm range, i.e., 135 spectral bands. Please let us know if further adjustments are needed.
Discussion: improve it according to your findings, most part of the text isn't related to the results of the paper.
Response: Thank you for your valuable feedback on the Discussion section. We highly appreciate your guidance in ensuring the manuscript's scientific rigor. We have carefully revised the Discussion to tightly align with the study's results, eliminating content unrelated to our findings and reinforcing connections between spectral analysis, model performance, and practical implications. This revision enhances the logical coherence between methods, results, and discussion, ensuring the section accurately reflects the study’s contributions. Please let us know if further adjustments are needed.
Conclusion: focus to answer the objective of the paper
Response: Thank you for your feedback. The conclusions have been refined to demonstrate precise alignment with the paper's stated objectives. The revised text delineates how the study results validate the method's accuracy, efficiency, and practical applicability, while underscoring its distinct methodological advantages. This establishes explicit linkages between research outcomes and original study goals.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
Dear authors,
The manuscript is relevant and presents an interesting approach. However, I have suggestions for improving the study and text:
- The idea of the keywords is to complement the title. I recommend not repeating the terms or keeping a maximum of two keywords. Example to be inserted: ResNet-18, precision agriculture… I suggest inserting words that are not in the title.
- The first sentences of the introduction: “China is the largest importer of soybeans in the world, with an external dependence of 81%. Imports of corn, which is grown in the same season as soybeans, account for only 1.8% of total demand” need to be referenced. Since the references are production numbers, it needs to show where this information came from.
- Correct a typo in line 46: “word” should be “world”.
- Line 66: would it be reference [6]? If it is 1, you need to check if the others follow the order, reference 6 does not appear.
- I was not able to find some of the references, especially the Chinese regional publications marked as “(in Chinese)” (e.g., Wu Z.G. 2004; Gai J.Y. et al. 2002 no. 1), in academic databases. I recommend including the original Chinese title, a link, or additional bibliographic details to help ensure the references can be verified by readers.
- I suggest make the objective more explicit, in a single sentence.
- The methodology presents many details, for better understanding, I suggest creating a flowchart showing all the steps.
- It is unclear the number and organization of hyperspectral samples used for model training and evaluation. It is unclear whether a single mean spectrum per leaf, all individual spectra, or averages per treatment group were used.
- The manuscript does not report the hyperparameters used for RF and ResNet-18 models. Were any parameters changed? Was any validation strategy performed?
- The manuscript reports 100% of precision, recall, and F1-score for certain classification scenarios, particularly for ResNet-18 and RF models on Day 7. While impressive, such perfect metrics are uncommon in real-world spectral data and may raise concerns regarding potential overfitting, especially in the absence of cross-validation or an external test set. I suggest clarify whether these results were obtained on an independent test set and whether any validation strategy (e.g., cross-validation) was implemented to mitigate overfitting.
- The discussion section is relatively small. I suggest include a more detailed comparison with similar studies in the literature (or show how this study filled the gap of others) and show how the findings could be applied in real-world field settings.
Author Response
Point-by-Point Responses to Reviewer 2's Comments
- The idea of the keywords is to complement the title. I recommend not repeating the terms or keeping a maximum of two keywords. Example to be inserted: ResNet-18, precision agriculture… I suggest inserting words that are not in the title.
Response: Thank you for your suggestions. We have re-refined and revised the keywords of this article, which can be found in lines 46-47 of the revised manuscript.
- The first sentences of the introduction: “China is the largest importer of soybeans in the world, with an external dependence of 81%. Imports of corn, which is grown in the same season as soybeans, account for only 1.8% of total demand” need to be referenced. Since the references are production numbers, it needs to show where this information came from
Response: Thank you for your corrections. We have rechecked and corrected the relevant data, and added references containing the data sources. Please refer to lines 51 and 52 of the revised manuscript.
- Correct a typo in line 46: “word” should be “world”.
Response: Thank you for your correction. I have corrected this mistake in line 50.
- Line 66: would it be reference [6]? If it is 1, you need to check if the others follow the order, reference 6 does not appear.
Response: Thank you for the correction. We have rechecked and corrected the order of the references.
- I was not able to find some of the references, especially the Chinese regional publications marked as “(in Chinese)” (e.g., Wu Z.G. 2004; Gai J.Y. et al. 2002 no. 1), in academic databases. I recommend including the original Chinese title, a link, or additional bibliographic details to help ensure the references can be verified by readers.
Response: Thank you for your corrections. We carefully checked each reference and identified instances of mistranslated titles. Currently, we have provided correct titles for all entries in the References section and ensured that they can be retrieved through academic search engines. For those references that can be replaced, we have also replaced them with more easily searchable ones.
- I suggest make the objective more explicit, in a single sentence.
Response: Thanks for your kind reminders. To make the research objective of this paper clearer, we changed the phrase "To address this gap" at line 21 of the abstract to "To develop and validate a spectral-feature-based prediction model for herbicide concentration classification".
- The methodology presents many details, for better understanding, I suggest creating a flowchart showing all the steps.
Response: Thank you for the suggestion. We have added a flow chart of the experiment in this study and provided explanations for the content of the figure. Please refer to lines 364-381 of the revised manuscript for details.
- It is unclear the number and organization of hyperspectral samples used for model training and evaluation. It is unclear whether a single mean spectrum per leaf, all individual spectra, or averages per treatment group were used.
Response: Thank you for your question. We have clarified the number of experimental samples: 238 samples were collected on Day 1, 242 on Day 3, 217 on Day 5, and 219 on Day 7, with a total of 916 samples; these changes are in lines 388 to 393. As stated in lines 237-238, we used the average spectrum from each leaf's region of interest (ROI) rather than individual spectra per leaf or averages per treatment group.
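To illustrate the stated choice of one mean spectrum per leaf region of interest (ROI), a small sketch; the cube dimensions and the rectangular ROI mask are illustrative assumptions:

```python
import numpy as np

# Hypothetical hyperspectral cube for one leaf: height x width x bands.
cube = np.random.default_rng(7).uniform(0.0, 1.0, size=(120, 160, 135))

# Boolean mask marking leaf pixels inside the ROI (here a simple central rectangle).
roi_mask = np.zeros(cube.shape[:2], dtype=bool)
roi_mask[30:90, 40:120] = True

# One spectrum per leaf: average all ROI pixels across the spatial dimensions.
mean_spectrum = cube[roi_mask].mean(axis=0)
print(mean_spectrum.shape)  # (135,)
```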
- The manuscript does not report the hyperparameters used for RF and ResNet-18 models. Were any parameters changed? Was any validation strategy performed?
Response: Thank you for your valuable comments. In this study, the hyperparameters of the random forest (RF) and ResNet-18 models were not adjusted, and default settings were adopted; additionally, no cross-validation strategy was employed. This is primarily because the focus of this research lies in exploring the application potential of hyperspectral imaging technology and deep learning models (particularly ResNet-18) in detecting early herbicide stress in vegetable soybeans, as well as comparing their performance with that of the random forest model, rather than optimizing the hyperparameters of the models themselves. Our aim was to first verify the basic performance of these models in this specific scenario.
Meanwhile, no cross-validation experiments were conducted in this study. The main reasons are the constraints of time and resources, along with the initial exploratory nature of the experimental design, which led us to choose a simpler validation method: a one-time evaluation of the models using an independent test set.
Nevertheless, we recognize the significance of hyperparameter tuning and cross-validation for evaluating model performance. In future research, we plan to further optimize the experimental design, systematically adjust and optimize hyperparameters, and adopt more comprehensive validation strategies such as cross-validation to more accurately assess the performance and generalization ability of the models.
- The manuscript reports 100% of precision, recall, and F1-score for certain classification scenarios, particularly for ResNet-18 and RF models on Day 7. While impressive, such perfect metrics are uncommon in real-world spectral data and may raise concerns regarding potential overfitting, especially in the absence of cross-validation or an external test set. I suggest clarify whether these results were obtained on an independent test set and whether any validation strategy (e.g., cross-validation) was implemented to mitigate overfitting.
Response: Thank you for your valuable comments. Regarding your concerns about overfitting, we have detailed the steps taken to prevent overfitting in lines 314-322 of the article. Specifically, we adopted a staged training and dynamic optimization strategy to address the overfitting issue. During the initial training phase, we froze the pre-trained ResNet-18 weights and only updated the parameters of the new layer to prevent "catastrophic forgetting". Subsequently, we fine-tuned the weights using a standard SGD optimizer, while implementing dynamic learning rate decay and early stopping strategies: if the validation loss did not improve within 20 epochs, the learning rate was reduced; if no improvement was observed within 45 epochs, training was halted. We used categorical cross-entropy as the loss function and accuracy as the primary evaluation metric. These measures have effectively curbed overfitting and enhanced the model's generalization ability.
- The discussion section is relatively small. I suggest include a more detailed comparison with similar studies in the literature (or show how this study filled the gap of others) and show how the findings could be applied in real-world field settings.
Response: Thank you for your valuable suggestions. Compared with similar studies in the literature, we note that some have focused on herbicide stress detection. For instance, Xiao et al. utilized leaf hyperspectral images, SPAD values, water content, and the HerbiNet model to predict herbicide stress level classifications. However, their model relied on complex multi-branch networks or full-band data, resulting in high computational costs and poor adaptability to portable field devices. Farber et al. explored herbicide stress detection using Raman spectroscopy, but this method requires manual peak analysis and lacks automation. In our study, we employed the lightweight ResNet-18 architecture, replacing traditional 2D convolutional layers with 1D convolutional layers to directly process 1D spectral data. This approach effectively reduced computational costs, lowered hardware requirements, and enhanced the model’s applicability to portable field equipment. We have added a note on this in the discussion section (lines 656-669).
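As an illustration of replacing 2D with 1D convolutions for one-dimensional spectral input, a hedged sketch of a ResNet-style basic block and a tiny classifier built from it; this is a generic re-implementation under stated assumptions, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class BasicBlock1D(nn.Module):
    """ResNet basic block with 1D convolutions for spectral sequences."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv1d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm1d(out_channels)
        self.conv2 = nn.Conv1d(out_channels, out_channels, kernel_size=3,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm1d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = None
        if stride != 1 or in_channels != out_channels:
            self.downsample = nn.Sequential(
                nn.Conv1d(in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm1d(out_channels))

    def forward(self, x):
        identity = x if self.downsample is None else self.downsample(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)

# Tiny spectral classifier: 1D stem + two residual stages + global pooling + 4-class head.
model = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=7, stride=2, padding=3, bias=False),
    nn.BatchNorm1d(32), nn.ReLU(inplace=True),
    BasicBlock1D(32, 32), BasicBlock1D(32, 64, stride=2),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 4))

x = torch.randn(8, 1, 135)   # batch of 8 spectra with 135 bands each
print(model(x).shape)        # torch.Size([8, 4])
```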
Regarding practical field applications, while large-scale field trials have not been conducted in this study, we performed systematic monitoring experiments on vegetable soybeans in a greenhouse environment. We cultivated and monitored the vegetable soybeans under conditions similar to actual field environments to ensure the experimental results have a certain degree of transferability. By optimizing the model architecture and band selection, we improved the model’s operational efficiency and lightweight design, providing technical support for the development of low-cost, low-power field herbicide stress early warning devices. We analyzed and explained the application potential of integrating hyperspectral imaging with deep learning in field crop stress monitoring in lines 629-656 of the discussion section. Additionally, our identification and analysis of key spectral bands lay the groundwork for further applying this technology in real-world field environments. In future research, we plan to implement this technology in small-scale field trials to verify its effectiveness in complex real-world settings.
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
Minor revision. Is the paper on the journal's template? The citation in the conclusion isn't relevant.
Author Response
Comments 1: Minor revision. Is the paper on the journal's template? The citation in the conclusion isn't relevant.
Response 1: Thank you very much for pointing out the citation issue in our manuscript. We sincerely appreciate your careful review and valuable feedback. Upon double-checking the conclusion section (specifically line 709), we confirm that references [37] and [38] were indeed inappropriately cited in this context. We have now removed these two references from the conclusion and ensured all remaining citations strictly align with our research content. This revision has been implemented in the updated manuscript (Line 709 in revised version).
Reviewer 2 Report
Comments and Suggestions for Authors
The authors have addressed all the comments and suggestions made in the previous review.
Author Response
Comments 1: The authors have addressed all the comments and suggestions made in the previous review.
Response 1: Thank you for your confirmation that we have addressed all previous comments. We sincerely appreciate the thorough and constructive feedback provided throughout the review process, which has significantly strengthened our manuscript.