IntegraPSG: Integrating LLM Guidance with Multimodal Feature Fusion for Single-Stage Panoptic Scene Graph Generation
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
Summary of the article under consideration: The target research problem in the article is Panoptic Scene Graph Generation (PSG). The authors point out that existing PSG methods rely heavily on visual features. As a result, these methods struggle to predict rare relations due to the long-tail distribution. To address this, the authors propose IntegraPSG. It is a single-stage framework that combines multimodal cues (visual, language, and depth features) with semantic guidance. It includes a network that predicts fewer but more meaningful relationships between objects in a scene. It also uses a decoder guided by large language models to improve its understanding of those relationships. IntegraPSG is evaluated against existing models using key metrics.
While the article is generally well written, the following comments must be addressed in the revised version of the article.
- The abstract should better highlight the novelty of the proposed solution. Although the abstract mentions that IntegraPSG shows steady performance and improves long-tail relation prediction, the words “steady performance” or “improves” do not provide a clear understanding of the significance of the achieved results. In other words, the abstract should include specific results. For example, it can mention exact improvements in mR@K and R@K scores. It should also explain how LLM guidance and multimodal fusion contribute to these gains.
- In the introduction section, the text between lines 58 and 120 must be reorganized in a more logical way.
- First, the limitations of existing PSG methods should be discussed in a dedicated paragraph.
- Then define what is needed to address these limitations (such as integration of multimodal features to enrich relational reasoning, mechanisms to filter and prioritize meaningful subject-object pairs, semantic guidance to improve recognition of rare and context-specific relations, etc.).
- The subsequent paragraph should introduce IntegraPSG as a unified single-stage framework that is designed to meet these needs. Outline its three core components (Panoptic Segmentation Network, Multimodal Sparse Relation Prediction Network, and Multimodal Relation Decoder). Explain how the proposed solution addresses the limitations.
- There must be a dedicated paragraph that summarizes the validation strategy as well as the achieved outcomes (such as extensive experiments conducted on the PSG dataset, evaluation using the SGDet task with metrics like R@K, mR@K, and Mean, comparative analysis against state-of-the-art methods, etc.).
- Section 2 reviews related works on PSG. It is recommended to include a comparative table summarizing key characteristics of existing PSG methods. This table may contain method name, core architecture (such as transformer-based, dual-decoder, prototype learning), relation modeling strategy (such as query matching, semantic alignment, attention mechanisms), strengths and limitations. This table may provide readers with a concise and structured overview of state-of-the-art on PSG. It will allow readers to understand how the proposed method differentiates itself and addresses the limitations of prior approaches.
- Section 3 clearly explains the proposed method. The structure as well as behavior of the proposed method is clear from the provided descriptions.
- Section 4 presents an evaluation of the IntegraPSG framework on the PSG dataset. It has been compared with state-of-the-art methods using Recall@K (R@K), mean Recall@K (mR@K), and an aggregated Mean score. While the authors provide an evaluation of IntegraPSG’s performance, an important shortcoming is the limited exploration of failure cases and error analysis. This is particularly true for the model’s handling of rare or ambiguous relations. Although the authors have acknowledged the moderate mRecall@K scores and attribute them to long-tail distribution challenges, they do not provide specific examples where the model misclassifies or fails to rank rare relations accurately. This shortcoming makes it harder to identify architectural or data-driven limitations.
Author Response
Response to reviewer#1:
Thank you for your time and thoughtful comments, which have greatly contributed to improving our manuscript. Your constructive feedback was highly valuable. We have carefully considered each of your suggestions and have revised the manuscript accordingly. Below, we provide a detailed response to each of your comments.
Comments 1: The abstract should better highlight the novelty of the proposed solution. Although the abstract mentions that IntegraPSG shows steady performance and improves long-tail relation prediction, the words “steady performance” or “improves” do not provide a clear understanding of the significance of the achieved results. In other words, the abstract should include specific results. For example, it can mention exact improvements in mR@K and R@K scores. It should also explain how LLM guidance and multimodal fusion contribute to these gains.
Response 1: Thank you for your helpful comments and suggestions. We have revised the abstract to include quantitative evidence—including R@K, mR@K, and Mean scores—to clearly demonstrate the strong and competitive performance of IntegraPSG (lines 16 to 18 of the revised manuscript). In addition, we clarify in the abstract how the integration of LLM guidance and multimodal feature fusion contributes to these gains (lines 7 to 15). All changes are marked in red for your convenience in the revised manuscript.
Comments 2: In the introduction section, the text between lines 58 and 120 must be reorganized in a more logical way.
- First, the limitations of existing PSG methods should be discussed in a dedicated paragraph.
- Then define what is needed to address these limitations (such as integration of multimodal features to enrich relational reasoning, mechanisms to filter and prioritize meaningful subject-object pairs, semantic guidance to improve recognition of rare and context-specific relations, etc.).
- The subsequent paragraph should introduce IntegraPSG as a unified single-stage framework that is designed to meet these needs. Outline its three core components (Panoptic Segmentation Network, Multimodal Sparse Relation Prediction Network, and Multimodal Relation Decoder). Explain how the proposed solution addresses the limitations.
- There must be a dedicated paragraph that summarizes the validation strategy as well as the achieved outcomes (such as extensive experiments conducted on the PSG dataset, evaluation using the SGDet task with metrics like R@K, mR@K, and Mean, comparative analysis against state-of-the-art methods, etc.).
Response 2: Thank you for these exceptionally insightful suggestions. We fully agree that restructuring the introduction according to your detailed roadmap significantly enhances the logical flow and clarity of our manuscript. This was an invaluable piece of feedback. In direct response to your points, we have thoroughly revised the introduction section as follows:
- We added a dedicated paragraph (lines 58 to 65) that exclusively discusses the core limitations of existing PSG methods, focusing on challenges in spatial reasoning and the long-tail distribution of relations.
- We have added a new paragraph (lines 66 to 82) that defines what is needed to address these challenges, precisely as you suggested.
- We then introduce our IntegraPSG framework (lines 83 to 102), systematically explaining how its three core components are specifically designed to meet these needs and address the previously outlined limitations.
- We have added a new, dedicated paragraph (lines 118 to 126) at the end of the introduction to summarize our validation strategy, experimental setup, and key outcomes, highlighting our model's competitive performance.
We believe this new structure now presents our motivation and contribution in a much clearer and more compelling manner. All changes are marked in red for your convenience in lines 58 to 126 of the revised manuscript.
Comments 3: Section 2 reviews related works on PSG. It is recommended to include a comparative table summarizing key characteristics of existing PSG methods. This table may contain method name, core architecture (such as transformer-based, dual-decoder, prototype learning), relation modeling strategy (such as query matching, semantic alignment, attention mechanisms), strengths and limitations. This table may provide readers with a concise and structured overview of state-of-the-art on PSG. It will allow readers to understand how the proposed method differentiates itself and addresses the limitations of prior approaches.
Response 3: Thank you for this helpful suggestion. We have added a comparative Table 1 summarizing key characteristics of existing PSG methods, including core architecture, relation modeling strategies, and their main strengths and limitations. This table provides readers with a concise overview of the current PSG methods. The title of Table 1 is marked in red in the revised manuscript.
Comments 4: Section 3 clearly explains the proposed method. The structure as well as behavior of the proposed method is clear from the provided descriptions.
Response 4: We sincerely thank you for recognizing the clarity of Section 3 (Method). Your comments encourage us to continue prioritizing clarity and readability throughout the manuscript.
Comments 5: Section 4 presents an evaluation of the IntegraPSG framework on the PSG dataset. It has been compared with state-of-the-art methods using Recall@K (R@K), mean Recall@K (mR@K), and an aggregated Mean score. While the authors provide an evaluation of IntegraPSG’s performance, an important shortcoming is the limited exploration of failure cases and error analysis. This is particularly true for the model’s handling of rare or ambiguous relations. Although the authors have acknowledged the moderate mRecall@K scores and attribute them to long-tail distribution challenges, they do not provide specific examples where the model misclassifies or fails to rank rare relations accurately. This shortcoming makes it harder to identify architectural or data-driven limitations.
Response 5: We sincerely thank you for these insightful comments. Your suggestions have been extremely helpful in improving the clarity and completeness of our manuscript. In the revised manuscript, we have added a dedicated subsection presenting qualitative analyses of representative success and failure cases in Section 4.5 of the Experiments section (lines 627 to 665). Figures 10 and 11 illustrate these cases, and we specifically highlight rare and ambiguous relations that were misclassified or missed by IntegraPSG, such as <person, crossing, road> and <person, looking at, bicycle>, alongside correctly predicted but unannotated triplets. This addition provides a clearer understanding of the architectural and data-driven challenges underlying moderate mR@K scores. All changes are marked in red in the revised manuscript.
Author Response File: Author Response.docx
Reviewer 2 Report
Comments and Suggestions for Authors
- The author refers to the term 'single-stage' in the paper title, but the definition of this term is not clear in the paper. It appears that there is an absence of reflection or verification in the subsequent methods or experiments.
- The extracted subjects, objects, and class labels form the basis for the subsequent work. The paper employs the Mask2Former method for this processing. It is recommended that the advantages of this method and the reasons for choosing it be explained in the paper.
- The LLM appears to be designed to provide richer and more detailed spatial relationship predicates. However, how is the LLM trained using prompts? It is imperative that the author provides a more detailed description of the role of the LLM and the manner in which it should be referenced in Figure 2.
- The PSG dataset is a significant externally introduced database. It is recommended that the structure and content of the PSG dataset be described in the paper, and that the manner in which it is specifically integrated with the model be explained. For example, what are the 56 relationship categories in the PSG dataset? Were all of them used? This will help readers understand.
- It is recommended that a comparison be made with existing methods in terms of inference efficiency and the computational resources required. It is recommended that the comparison be supplemented by additional explanations.
- As demonstrated in Section 4.3, Figure 6 and Table 1 illustrate that the mR@K scores of IntegraPSG (22.3%, 26.3%, 28.6%) decrease as k decreases, and the performance discrepancy with other models gradually widens. It is recommended that a more thorough analysis be conducted to ascertain the underlying reasons for this phenomenon.
- Line 371 contains a spelling error: “pur approach”.
Author Response
Response to reviewer#2:
Thank you very much for your valuable time and careful review. Your comments and suggestions have provided significant guidance in improving our manuscript. We have addressed all the concerns you raised and made the corresponding revisions. Please find our detailed responses to each comment below.
Comments 1: The author refers to the term 'single-stage' in the paper title, but the definition of this term is not clear in the paper. It appears that there is an absence of reflection or verification in the subsequent methods or experiments.
Response 1: Thank you for your helpful comments and suggestions. We have clarified in the Introduction section that IntegraPSG is single-stage in that panoptic segmentation and relation prediction are jointly performed within a unified network, optimized end-to-end via a single joint objective (lines 84 to 86). In the Methods section, we further emphasize that all components are optimized jointly under a weighted loss, making segmentation differentiable and co-trainable with relation prediction (lines 433 to 436). Finally, in the Experiments section, we verify this design by comparing IntegraPSG with a two-stage (VCTree) baseline, showing consistent improvements across all metrics (lines 533 to 535). All changes are marked in red in the revised manuscript.
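For readers who want to see what "optimized end-to-end via a single joint objective" means in practice, the minimal sketch below combines segmentation and relation loss terms into one weighted sum driving a single backward pass; the term names and weights are hypothetical placeholders, not the exact losses used in the manuscript.

```python
import torch

def joint_weighted_loss(seg_losses, rel_losses, weights):
    """Sum segmentation and relation loss terms with scalar weights so that one
    backward pass trains both parts of the network jointly."""
    total = torch.zeros(())
    for name, value in {**seg_losses, **rel_losses}.items():
        total = total + weights.get(name, 1.0) * value
    return total

# Dummy values only; real losses would come from the segmentation and relation heads.
seg = {"mask": torch.tensor(0.8), "cls": torch.tensor(0.5)}
rel = {"pair": torch.tensor(0.6), "predicate": torch.tensor(1.2)}
w = {"mask": 2.0, "cls": 1.0, "pair": 1.0, "predicate": 1.0}
print(joint_weighted_loss(seg, rel, w))  # a single scalar, optimized end-to-end
```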
Comments 2: The extracted subjects, objects, and class labels form the basis for the subsequent work. The paper employs the Mask2Former method for this processing. It is recommended that the advantages of this method and the reasons for choosing it be explained in the paper.
Response 2: Thank you for these valuable comments and suggestions. We agree that clarifying the rationale for our methodological choice is important. In the revised manuscript, we have now incorporated a discussion on the key advantages of the Mask2Former method and our reasons for selecting it (lines 166 to 172). All changes are marked in red in the revised manuscript for easy review.
Comments 3: The LLM appears to be designed to provide richer and more detailed spatial relationship predicates. However, how is the LLM trained using prompts? It is imperative that the author provides a more detailed description of the role of the LLM and the manner in which it should be referenced in Figure 2.
Response 3: Thank you for your insightful comments and suggestions. We have revised the Language Prompt Features Extraction subsection to detail the role and usage of the LLM, clarifying that it is not trained but is instead utilized in a one-time, offline process to build a knowledge base of pre-computed features. In addition, we now explain how these features are retrieved during online training to clarify the data flow depicted in Figure 2. The corresponding revisions can be found in Section 3.5.2 of the Method section (lines 360 to 384). All changes are marked in red in the revised manuscript.
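To make the offline/online distinction concrete, the sketch below shows the generic build-once, look-up-later pattern this response describes; encode_with_llm, the predicate list, and the cache path are illustrative assumptions, not the manuscript's actual interface.

```python
import torch

def build_language_knowledge_base(predicates, encode_with_llm, path="lang_kb.pt"):
    """Offline, one-time step: encode each predicate with a frozen LLM/text
    encoder and cache the resulting feature vectors to disk."""
    kb = {p: encode_with_llm(p) for p in predicates}  # {predicate name: feature tensor}
    torch.save(kb, path)
    return kb

def lookup_language_features(kb, predicate_names):
    """Online step during training or inference: no LLM call, only a table lookup."""
    return torch.stack([kb[p] for p in predicate_names])

# Toy usage with a stand-in encoder (a real pipeline would query the LLM here).
kb = build_language_knowledge_base(["on", "beside", "crossing"],
                                   encode_with_llm=lambda p: torch.randn(16))
feats = lookup_language_features(kb, ["crossing", "on"])  # shape: (2, 16)
```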
Comments 4: The PSG dataset is a significant externally introduced database. It is recommended that the structure and content of the PSG dataset be described in the paper, and that the manner in which it is specifically integrated with the model be explained. For example, what are the 56 relationship categories in the PSG dataset? Were all of them used? This will help readers understand.
Response 4: Thank you for your valuable suggestion to provide more details about the PSG dataset. We agree that this information is crucial for readers' understanding. In the revised manuscript, we have addressed your specific points as follows: To describe the structure and content of the PSG dataset, we have enriched the Dataset paragraph by clarifying its foundation in panoptic segmentation and scene graphs (lines 481 to 482). To answer what the 56 relationship categories are, we have introduced Table 2, which provides a complete list, and have also explicitly stated that, yes, all of them were used for training and evaluation (lines 485 to 487). Regarding the manner in which the PSG dataset is specifically integrated with our model, these details are provided in our Method section, as this integration is deeply tied to the technical design of our model's components. This section details its application in building the statistical prior matrix (lines 294 to 295) and its role in constructing the language knowledge base as illustrated in Figure 4(b) (lines 374 to 380). All changes are marked in red in the revised manuscript for easy review.
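As a rough illustration of what a statistical prior matrix of this kind can look like, the sketch below counts annotated predicates for every (subject class, object class) pair and normalizes the counts into probabilities; the manuscript's exact construction may differ.

```python
import numpy as np

def build_prior_matrix(triplets, num_obj_classes, num_predicates):
    """Count how often each predicate is annotated for every (subject class,
    object class) pair in the training split, then normalize per pair."""
    counts = np.zeros((num_obj_classes, num_obj_classes, num_predicates))
    for subj_cls, pred_cls, obj_cls in triplets:  # annotated (s, p, o) class ids
        counts[subj_cls, obj_cls, pred_cls] += 1
    totals = counts.sum(axis=-1, keepdims=True)
    return np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)

# Toy usage: two annotated triplets, 3 object classes, 2 predicate classes.
prior = build_prior_matrix([(0, 1, 2), (0, 0, 2)], num_obj_classes=3, num_predicates=2)
print(prior[0, 2])  # [0.5 0.5]
```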
Comments 5: It is recommended that a comparison be made with existing methods in terms of inference efficiency and the computational resources required. It is recommended that the comparison be supplemented by additional explanations.
Response 5: We sincerely thank you for your valuable comments and suggestions. We have added a comparison of inference efficiency with existing methods, as shown in Table 3, along with additional explanations to clarify the results (lines 568 to 577). As the other methods do not report computational resource metrics and due to differences in experimental settings, we do not directly measure their computational resource usage; however, inference speed provides a practical and informative proxy for evaluating efficiency. These updates are marked in red in the revised manuscript.
Comments 6: As demonstrated in Section 4.3, Figure 6 and Table 1 illustrate that the mR@K scores of IntegraPSG (22.3%, 26.3%, 28.6%) decrease as k decreases, and the performance discrepancy with other models gradually widens. It is recommended that a more thorough analysis be conducted to ascertain the underlying reasons for this phenomenon.
Response 6: Thank you for these insightful comments. We agree that a deeper analysis of this phenomenon is warranted. To address your suggestions, we have substantially revised the paragraph to provide a thorough analysis of the underlying reasons for the observed mR@K performance. Our new analysis explains that the dense distribution of confidence scores causes common relations and some false positives to often outrank correct but infrequent relations, an effect that is magnified at smaller k values (lines 556 to 563). All changes are marked in red in the revised manuscript.
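The toy ranking below illustrates this effect with made-up numbers (they are not results from the paper): when confidence scores are densely packed, a correct but rare triplet can sit just below several frequent relations and a false positive, so it is still recalled at a larger k but drops out at a smaller one.

```python
# Hypothetical candidate triplets with densely packed confidence scores.
scores = {
    "person-on-road (frequent, correct)":     0.62,
    "person-beside-car (frequent, correct)":  0.61,
    "tree-on-sidewalk (false positive)":      0.60,
    "car-on-road (frequent, correct)":        0.59,
    "person-holding-bag (frequent, correct)": 0.58,
    "person-crossing-road (rare, correct)":   0.57,
}
ranked = sorted(scores, key=scores.get, reverse=True)
for k in (5, 10):
    recalled = "person-crossing-road (rare, correct)" in ranked[:k]
    print(f"k={k}: rare relation inside the top-k -> {recalled}")
# The rare triplet is lost at k=5 despite scoring only 0.05 lower,
# which is why mR@K suffers more at smaller k.
```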
Comments 7: Line 371 contains a spelling error: “pur approach”.
Response 7: Thank you for your careful reading and for pointing out this typo. We have corrected "pur approach" to "our approach". This change appears in line 386 of the revised manuscript.
Author Response File: Author Response.docx
Reviewer 3 Report
Comments and Suggestions for Authors
This manuscript proposes a unified single-stage Panoptic Scene Graph Generation (PSG) method named “IntegraPSG,” which integrates large language model (LLM) guidance with multimodal feature fusion to address spatial reasoning and long-tail distribution issues in PSG.
1) The manuscript states that Seesaw Loss is used to handle the long-tail problem. The authors should compare it against other standard losses. Although the manuscript claims to solve the “long-tail” issue, in practice it mainly relies on LLM-generated textual descriptions and does not fundamentally address data imbalance. Why not adopt more direct remedies such as re-sampling, loss re-weighting, or data augmentation? Is this “language guidance” truly more effective than conventional long-tail treatments? The authors need to further substantiate the effectiveness of the proposed approach by adding ablation studies that compare the methods used here with alternative choices, and quantify the independent contribution of each component.
2) The proposed approach combines multimodal feature fusion with LLM prompts, which would, in principle, substantially increase computation and memory cost at inference time. However, the experiments do not report inference speed, GPU memory usage, or hardware requirements. Please provide quantitative analyses of inference efficiency and resource consumption.
In summary, the current experimental evidence is insufficient to fully justify the method; a major revision is required.
Author Response
Response to reviewer#3:
Thank you for your valuable time and insightful comments, which have significantly improved our manuscript. We appreciate the constructive feedback provided. We have carefully addressed all the raised concerns and revised our manuscript accordingly. Below, please find our detailed responses to each of your comments.
Comments 1: The manuscript states that Seesaw Loss is used to handle the long-tail problem. The authors should compare it against other standard losses. Although the manuscript claims to solve the “long-tail” issue, in practice it mainly relies on LLM-generated textual descriptions and does not fundamentally address data imbalance. Why not adopt more direct remedies such as re-sampling, loss re-weighting, or data augmentation? Is this “language guidance” truly more effective than conventional long-tail treatments? The authors need to further substantiate the effectiveness of the proposed approach by adding ablation studies that compare the methods used here with alternative choices, and quantify the independent contribution of each component.
Response 1: Thank you for these insightful and constructive comments. We agree that substantiating our approach with direct comparisons is crucial. We have incorporated two key ablation studies into the revised manuscript. First, to directly compare our language-guidance method against conventional long-tail treatments and to quantify the LLM's independent contribution, we have added a comprehensive analysis featuring Re-sampling and Loss re-weighting (Table 6, and lines 601 to 613). Second, to justify our use of Seesaw Loss, we have included a new comparison against standard losses like Focal Loss (Table 7, and lines 618 to 626). These additions are designed to fully substantiate the effectiveness and component contributions of our proposed approach. All changes are marked in red for your convenience.
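For context, one conventional loss re-weighting remedy of the kind such ablations compare against is class-balanced cross-entropy with effective-number weights (Cui et al., 2019); the sketch below is a generic baseline of that type, not the exact configuration used in Table 6 or the Seesaw Loss adopted in the manuscript.

```python
import torch
import torch.nn.functional as F

def class_balanced_ce(logits, targets, class_counts, beta=0.999):
    """Cross-entropy over predicate classes with effective-number class weights:
    rare predicates receive larger weights and frequent ones smaller weights,
    a standard re-weighting remedy for long-tailed label distributions."""
    counts = torch.as_tensor(class_counts, dtype=torch.float)
    weights = (1.0 - beta) / (1.0 - torch.pow(beta, counts))
    weights = weights / weights.sum() * len(class_counts)  # normalize to mean 1
    return F.cross_entropy(logits, targets, weight=weights)

# Toy usage: 4 predicate classes dominated by a single head class.
logits = torch.randn(8, 4)
targets = torch.randint(0, 4, (8,))
loss = class_balanced_ce(logits, targets, class_counts=[5000, 800, 60, 12])
```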
Comments 2: The proposed approach combines multimodal feature fusion with LLM prompts, which would, in principle, substantially increase computation and memory cost at inference time. However, the experiments do not report inference speed, GPU memory usage, or hardware requirements. Please provide quantitative analyses of inference efficiency and resource consumption.
Response 2: Thank you for your helpful comments and suggestions. We have added a quantitative analysis of inference efficiency and computational cost, including inference time and GPU memory usage, to the revised manuscript (lines 568 to 577). All changes are marked in red.
Author Response File: Author Response.docx
Reviewer 4 Report
Comments and Suggestions for Authors
Excellent work — the study is rich in content and the data analysis is sufficient, supporting the novelty of the proposed IntegraPSG method. I have two suggestions for improvement:
1. The manuscript contains a large number of mathematical formulas. Please carefully check all equations and ensure that every variable and notation is explicitly defined in the text.
2. Please add a Discussion section that highlights the current limitations of the study and outlines directions for future research.
Author Response
Response to reviewer#4:
We greatly appreciate your time and insightful feedback, which has played a key role in enhancing our manuscript. Your constructive suggestions were extremely helpful. We have thoroughly addressed all the points raised and revised the manuscript accordingly. Detailed responses to each of your comments are provided below.
Comments 1: The manuscript contains a large number of mathematical formulas. Please carefully check all equations and ensure that every variable and notation is explicitly defined in the text.
Response 1: We thank you for these valuable comments and suggestions. In response to your comment, we have conducted a comprehensive review of the entire manuscript to ensure that all variables and notations are explicitly defined upon their first use or immediately following the corresponding equation. As these clarifications have been integrated throughout the text, we have not itemized the specific line numbers. We are confident that these revisions have substantially enhanced the manuscript's clarity and rigor.
Comments 2: Please add a Discussion section that highlights the current limitations of the study and outlines directions for future research.
Response 2: Thank you very much for your valuable suggestions, which have greatly helped to enhance the clarity and completeness of our manuscript. We have added a new Discussion section highlighting the current limitations of IntegraPSG and outlining directions for future research. In the revised manuscript, the title of the new Discussion section is marked in red to indicate the newly added content, while the main text has been fully updated but is not individually highlighted. This section is placed immediately before the Conclusion and provides a detailed discussion of the model’s limitations, including language-guided bias, causal reasoning constraints, and challenges posed by the long-tail distribution, as well as potential avenues for future improvements (lines 671 to 682).
Author Response File: Author Response.docx
Reviewer 5 Report
Comments and Suggestions for Authors
1. Smoother Read: Some parts, especially in the method section, get a bit dense. Long sentences packed with technical terms can be tough to follow. Breaking these down into shorter, step-by-step explanations would make it much easier for everyone to grasp your cool ideas. Think "guide the reader" rather than "state the facts."
2. Clearer Figures (7 & 8): The info in Figures 7 and 8 is useful, but they're a bit crowded! Overlapping labels and tiny text make them hard to decipher. Simplifying the layout, using bigger fonts, and maybe reducing some annotation clutter would make their message pop instantly.
3. Demystify the Math: The multimodal fusion part (especially those weighting parameters – λ and friends) is crucial. Could you add a sentence or two explaining why they matter and how you chose their values? A simple, intuitive explanation alongside the math would help readers connect the equations to what's actually happening.
4. Show, Don't Just Tell (LLM Impact): The LLM prompts are a standout feature! To really sell it, could you add a couple of clear examples? Show side-by-side: "Here's what the model predicted without the LLM prompt, and here's the better prediction with it." This makes the benefit concrete.
5. Where it Stumbles (Briefly): A short, honest look at where IntegraPSG still gets things wrong would be valuable. What kinds of scenes or relationships trip it up? This isn't a weakness – it shows maturity, helps others build on your work, and guides future fixes.
6. Refs are Solid! Excellent job on the references – they cover all the right bases and show you know the field inside out.
Author Response
Response to reviewer#5:
We sincerely appreciate the time and thoughtful feedback you have provided, which have greatly contributed to improving our manuscript. Your constructive suggestions were extremely valuable. We have carefully considered all the issues raised and made corresponding revisions. Below, we provide a detailed response to each of your comments.
Comments 1: Smoother Read: Some parts, especially in the method section, get a bit dense. Long sentences packed with technical terms can be tough to follow. Breaking these down into shorter, step-by-step explanations would make it much easier for everyone to grasp your cool ideas. Think "guide the reader" rather than "state the facts."
Response 1: We sincerely thank you for this valuable suggestion. In the revised manuscript, we have carefully revised the Method section to improve readability. Long and complex sentences have been broken down into shorter, step-by-step explanations, with technical terms introduced more gradually to guide the reader through our approach. We believe these changes make the presentation of our methodology clearer and easier to follow. All changes are marked in red.
Comments 2: Clearer Figures (7 & 8): The info in Figures 7 and 8 is useful, but they're a bit crowded! Overlapping labels and tiny text make them hard to decipher. Simplifying the layout, using bigger fonts, and maybe reducing some annotation clutter would make their message pop instantly.
Response 2: We sincerely thank you for this helpful suggestion. In the revised manuscript, we have enlarged the fonts and adjusted the sizes of Figures 10 and 11 to improve readability. We have also added a new Section 4.5, which provides analysis of selected cases from these figures, extracting valuable information to facilitate readers’ understanding.
Comments 3: Demystify the Math: The multimodal fusion part (especially those weighting parameters – λ and friends) is crucial. Could you add a sentence or two explaining why they matter and how you chose their values? A simple, intuitive explanation alongside the math would help readers connect the equations to what's actually happening.
Response 3: Thank you for your valuable comments and suggestions. We have added a brief, intuitive explanation (lines 306 to 308), clarifying the role of the weighting parameters (α, β) in balancing contributions from different modalities. We also specify that these parameters are learned end-to-end during training, allowing the network to automatically adjust their values to optimize feature integration for the PSG task. These additions aim to help readers better connect the mathematical formulation with its practical effect in our model. All changes are marked in red in the revised manuscript.
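As a simple illustration of how such weighting parameters can be learned end-to-end rather than hand-tuned, the sketch below makes α and β trainable scalars inside a toy fusion module; the actual fusion operator and feature dimensions in the manuscript may differ.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Two trainable scalars balance the contributions of two modality features
    before a shared projection; as nn.Parameters they are updated by the same
    optimizer as the rest of the network instead of being tuned by hand."""
    def __init__(self, dim):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(1.0))  # e.g. visual-feature weight
        self.beta = nn.Parameter(torch.tensor(1.0))   # e.g. depth/language-feature weight
        self.proj = nn.Linear(dim, dim)

    def forward(self, feat_a, feat_b):
        return self.proj(self.alpha * feat_a + self.beta * feat_b)

# Toy usage
fusion = WeightedFusion(dim=256)
fused = fusion(torch.randn(4, 256), torch.randn(4, 256))  # shape: (4, 256)
```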
Comments 4: Show, Don't Just Tell (LLM Impact): The LLM prompts are a standout feature! To really sell it, could you add a couple of clear examples? Show side-by-side: "Here's what the model predicted without the LLM prompt, and here's the better prediction with it." This makes the benefit concrete.
Response 4: We sincerely thank you for this helpful suggestion. To demonstrate the impact of LLM guidance, we provide side-by-side visualizations of predictions with and without LLM prompts in Figures 7–9, highlighting the improvement brought by language information.
Comments 5: Where it Stumbles (Briefly): A short, honest look at where IntegraPSG still gets things wrong would be valuable. What kinds of scenes or relationships trip it up? This isn't a weakness – it shows maturity, helps others build on your work, and guides future fixes.
Response 5: We sincerely thank you for your insightful comments and suggestions. Your feedback has encouraged us to more clearly articulate the limitations of IntegraPSG, thereby enhancing the transparency and readability of our work. We provide a concise analysis (lines 666 to 669) of the scenes and relationship types that challenge IntegraPSG, complemented by Section 4.5, which presents detailed qualitative examples. We believe that, guided by your comments, these additions not only enhance the manuscript’s clarity but also provide valuable guidance for future research building on our approach. For your convenience, all content in Section 4.5 is newly added, with the subsection title marked in red.
Comments 6: Refs are Solid! Excellent job on the references – they cover all the right bases and show you know the field inside out.
Response 6: We sincerely thank you for your kind words and positive feedback on our references. We greatly appreciate your recognition and are glad that the cited literature meets the expectations for thoroughness and relevance.
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
The article can be published.
Reviewer 3 Report
Comments and Suggestions for Authors
The author has made revisions according to the comments, and I think it can be accepted.