Optimization of Maritime Target Element Resolution Strategies for Non-Uniform Sampling Based on Large Language Model Fine-Tuning
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
While the paper targets a highly relevant topic, incorporating trending Large Language Models (LLMs) for the non-uniform data sampling problem in maritime use cases, which would definitely attract readers due to its novelty and uniqueness, the paper still has some major drawbacks when it comes to structure and content organization. Therefore, before any further step towards publication, the following corrections and extensions have to be considered within the revised version of the paper:
- The abstract could also mention how much better the proposed approach is / how large the improvement is compared to similar or relevant solutions
- The introduction is too long. It contains too many figures and a significant amount of text that should be distributed among other sections, such as background, related works, and methodology. The introduction should be shorter and focus more on the problem addressed, the challenges tackled, and the practical implications. Fig. 1-3 might be moved later in the text, but should not appear in the introduction.
- towards the end of introduction, overview of the rest of the paper’s structure could be included
- section 2 is quite long and combines both the background and related works. However, the organization of the content should be improved and more clearly distinguished
- paper should provide overview of similar / comparable related works within a distinct section or subsection of section 2, where tabular summary of underlying approaches, models and use cases might be included as well
- Please make sure that the quality of figures and readability of text is consistent and not blurred, especially in larger diagrams (especially Fig. 4 and Fig. 5, where the text seems slightly blurred)
- Conclusion might include discussion of future works and potential further extensions
- More detailed discussion of both the advantages and limitations compared to other solutions should be included in the section before conclusion, while the last subsection should be removed. Conclusion usually has no subsections in most journal paper formats.
- make sure that the titles of sections and subsections are intuitive enough and not too long
- Fig. 1 - text labels should be enlarged
- larger number of more recent references should be included (especially the related works)
Author Response
Manuscript ID: jmse-3826776 "Optimization of Maritime Target Element Resolution Strategies for Non-uniform Sampling Based on Large Language Model Fine-tuning"
Response to reviewers
Thank you for your detailed and meticulous review of our manuscript titled "Optimization of Maritime Target Element Resolution Strategies for Non-uniform Sampling Based on Large Language Model Fine-tuning". We deeply value the detailed and insightful feedback you have provided, which not only identifies areas for enhancement but also recognizes the innovative contributions of our research to the evolving field of intelligent maritime navigation.
Acknowledging the importance of your comments, we have undertaken a careful and comprehensive revision of our manuscript. We have strived to address each concern you raised, aiming to improve the manuscript's clarity, completeness, and academic rigor.
Our primary objective through these revisions is to refine our manuscript such that it more effectively communicates its significance, methodologies, and findings, thereby ensuring it substantially contributes to the ongoing scholarly dialogue within the maritime navigation domain. We are resolute in our commitment to enhance the manuscript in accordance with your esteemed feedback and eagerly anticipate any further advice that could elevate our work's quality and scholarly impact.
We express our sincere gratitude for the time and effort you have dedicated to reviewing our manuscript. We are committed to resolving any outstanding issues and advancing the manuscript towards publication. Once again, we thank you for your rigorous review and constructive criticism.
1.Comment: The abstract could also mention how much better the proposed approach is / how large the improvement is compared to similar or relevant solutions
1.Response: Thank you for your valuable comment on supplementing the improvement extent of the proposed method compared to similar/relevant solutions. In the revised abstract, we have explicitly added quantitative comparison results with five representative control models (covering original general LLMs, rule-only models, data-only fine-tuned models, and mainstream general LLMs like ChatGPT-4.0 and DeepSeek) to highlight the method’s advantages.
Specifically, the proposed method achieves a comprehensive Final Score of 0.8133, which is 37.1% higher than the suboptimal data-only fine-tuned model (Final Score = 0.5933) and 87.7% higher than the original general LLM (Final Score = 0.4333). In high-risk avoid scenarios (critical for navigation safety), its Top-1 Accuracy (0.7333) is 46.7% higher than the suboptimal control model, and Scene-Sensitive Recall (0.7333) is 2.2 times that of the original general LLM. For close and away scenarios, the proposed method’s Top-1 Accuracy (0.8667 and 0.9000 respectively) also significantly outperforms all control models. These quantitative data clearly demonstrate the improvement of the proposed method over relevant solutions, addressing the requirement of your comment.
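For transparency, the relative improvements quoted above follow directly from the reported Final Scores; a minimal sketch of the calculation, using only the values stated in the revised abstract, is:

```python
# Relative improvement of the proposed method's Final Score over two control models,
# computed from the values reported in the revised abstract.
def relative_improvement(proposed: float, baseline: float) -> float:
    """Improvement of `proposed` over `baseline`, as a percentage."""
    return (proposed - baseline) / baseline * 100.0

proposed_final_score = 0.8133
data_only_finetuned = 0.5933    # suboptimal control (data-only fine-tuned model)
original_general_llm = 0.4333

print(f"{relative_improvement(proposed_final_score, data_only_finetuned):.1f}%")   # -> 37.1%
print(f"{relative_improvement(proposed_final_score, original_general_llm):.1f}%")  # -> 87.7%
```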
Corresponding modified position: Abstract
Page: 1
Paragraph: Abstract (the only paragraph)
Lines: 12-27
2.Comment: The introduction is too long. It contains too many figures and a significant amount of text that should be distributed among other sections, such as background, related works, and methodology. The introduction should be shorter and focus more on the problem addressed, the challenges tackled, and the practical implications. Fig. 1-3 might be moved later in the text, but should not appear in the introduction.
2.Response: Thank you for your valuable comment on the length and content arrangement of the Introduction section. We fully agree with your suggestion that the Introduction should be concise, focus on core elements, and reasonably distribute redundant content to other sections. We have made targeted revisions as follows:
- Shortening the Introduction and Focusing on Core Content
We have deleted redundant content in the original Introduction, including the detailed evolution process of large language models (e.g., specific parameter details of GPT-5, Gemini 2.5, and Llama-3.1) and the overly detailed description of fine-tuning application cases in non-maritime fields (e.g., medical clinical document classification and financial stock return prediction). The revised Introduction is concentrated on three core parts:
Problem Addressed: Clearly stating the defects of traditional maritime target element resolution methods in non-uniform sampling, data missing, and noisy scenarios (relying on manual experience and uniform sampling, leading to low accuracy and efficiency), and the limitation of general large language models (LLMs) in direct application (general knowledge gaps with maritime needs).
Challenges Tackled: Summarizing three core challenges to be solved in this study (domain adaptation gaps of LLMs in maritime scenarios, high computational cost of fine-tuning, and difficulty in adapting traditional models to non-uniform sampling).
Practical Implications: Briefly pointing out that the proposed method can enhance the accuracy and adaptability of maritime target element resolution, and promote the application of LLMs in navigation safety.
- Reassigning Content to Corresponding Sections
The content about the evolution of LLMs and cross-domain application cases, which was originally in the Introduction, has been moved to Section 2 "Review of Research Status" (2.1 "Fine-tuning Techniques for Large Language Models") to enrich the review of related work and avoid disrupting the logical flow of the Introduction.
The detailed description of the "hierarchical prompt system" and "non-uniform sampling-model collaboration mechanism" in the original Introduction has been transferred to Section 3 "Non-uniform Sampling Strategy Design based on Large Language Model Fine-tuning" and Section 4 "Construction of Maritime Target Element Resolution Model and Strategy Adaptive Optimization Model", respectively, to make the methodology section more comprehensive and detailed.
- Adjusting the Placement of Figures 1-3
In accordance with your suggestion, we have moved Figures 1-3 out of the Introduction and placed them in the corresponding content sections to ensure that figures match the text logically:
Figure 1 (Model Structure): Originally in the Introduction, it is now placed in Section 4.1 "Architecture Design of Maritime Target Element Resolution Model" (as the core framework diagram of the model architecture, matching the description of the "hierarchical prompt system" and "four core modules" in this section).
Original Figure 2 (Text Information Comprehension Test) and Original Figure 3 (Navigational Data Comprehension Test): Originally in the Introduction, they have been relocated to the beginning of Section 3 and renumbered as Figure 3-1 and Figure 3-2 respectively. These two figures serve as the basis for explaining the "non-uniform sampling strategy design", supporting the subsequent discussion on how to optimize sampling logic based on the model's text and navigational data comprehension capabilities in Section 3.
After the above revisions, the Introduction has become more concise and focused, and the overall structure of the article is more reasonable, with each section's content being more targeted. We have carefully checked the logical connection between sections to ensure the smoothness of the full text.
Thank you again for your professional guidance, which has significantly improved the quality of this manuscript.
- Shortening the Introduction
Pages: 1–3
Paragraphs: Introduction, Paragraphs 1–7
Global lines: 76–123
- Content moved to Section 2.1
Pages: 4–5
Paragraphs: First 3 paragraphs of Section 2.1
Global lines: 126–145
- Content moved to Section 3
Pages: 9–12
Paragraphs: First 6 paragraphs of Section 3
Global lines: 186–225
- Content moved to Section 4
Pages: 13–18
Paragraphs: Section 4 opening through Section 4.2
Global lines: 227–247
- Figures relocation
Figure 1 (Model Structure) → moved to Section 4.1 (page 13, line 61), renumbered Figure 4-1.
Figure 2 (Text Comprehension Test) → moved to Section 3 (page 9, line 31), renumbered Figure 3-1.
Figure 3 (Navigational Data Comprehension Test) → moved to Section 3 (page 10, line 32), renumbered Figure 3-2.
3.Comment: towards the end of introduction, overview of the rest of the paper’s structure could be included
3.Response: Thank you for your constructive comment regarding adding an overview of the paper’s structure at the end of the Introduction. We fully agree that this addition will help readers quickly grasp the logical framework of the study, and we have supplemented the content in line with the revised structure of the paper, as detailed below:
We have added a 1-paragraph structure overview at the end of the revised Introduction (after clarifying the problem addressed, challenges tackled, and practical implications). The specific content is as follows:
" The remainder of this paper is structured as follows: Section 2 reviews related work, including LLM fine-tuning techniques, optimal selection methods based on LLMs, non-uniform sampling theory, and maritime target element resolution funda-mentals; Section 3 designs the LLM selection, fine-tuning strategy, and non-uniform sampling innovation for maritime scenarios; Section 4 constructs the maritime target element resolution model with a hierarchical prompt system; Section 5 verifies the method’s effectiveness through experiments; Section 6 summarizes conclusions and future work."
This overview strictly aligns with the revised section divisions (e.g., the relocation of content to Section 2, 3, and 4, and the adjustment of figure positions) mentioned in our previous revisions. It avoids redundant details and only outlines the core focus of each section, ensuring the Introduction remains concise while guiding readers effectively.
We have checked that the added content integrates smoothly with the preceding parts of the Introduction, without disrupting the logical flow of the core content. Thank you again for your professional advice, which further improves the readability and structural clarity of the manuscript.
Added overview of paper structure
Pages: Page 3
Paragraphs: Final paragraph of Introduction
Global lines: 117-123
4.Comment: section 2 is quite long and combines both the background and related works. However, the organization of the content should be improved and more clearly distinguished
4.Response: We sincerely appreciate the reviewer’s insightful comment. In the revised manuscript, we have carefully reorganized Section 2 to provide a clearer separation between the background and related works:
The general background on maritime target element resolution and the challenges of non-uniform sampling has been streamlined and consolidated into the end of Section 1. This ensures that Section 2 is dedicated primarily to reviewing existing research.
Section 2 has been retitled as “Review of Research Status” and is now structured into four well-defined subsections:
2.1 Fine-tuning Techniques for Large Language Models
2.2 Optimal Selection Methods Based on Large Language Models
2.3 Non-uniform Sampling Theory
2.4 Resolution Basis of Maritime Target Elements
Each subsection has been refined for conciseness, with smoother transitions added to highlight the logical progression of ideas. The background context and related works are now clearly distinguished, making it easier for readers to understand how prior research motivates our proposed approach.
We believe these revisions significantly improve the clarity, logical flow, and readability of the manuscript, and better align the structure with the reviewer’s recommendation.
Section 2 Restructuring and Streamlining
Pages: 4–8
Paragraphs: Entire Section 2 content
Global lines: 124-247
5.Comment: paper should provide overview of similar / comparable related works within a distinct section or subsection of section 2, where tabular summary of underlying approaches, models and use cases might be included as well
5.Response: We sincerely thank the reviewer for this constructive suggestion. In the revised manuscript, we have enhanced Section 2 by explicitly adding a distinct review of comparable related works:
Section 2 has been restructured into clear subsections (2.1–2.4) to improve readability and ensure that related works are more systematically presented.
Within Section 2.1 (Fine-tuning Techniques for Large Language Models), we have incorporated a new Table 2-1 that provides a comparative summary of mainstream parameter-efficient fine-tuning (PEFT) methods (e.g., Prefix Tuning, Prompt Tuning, LoRA, DyLoRA, AdaLoRA, QLoRA). The table outlines their core principles, advantages, limitations, and typical application scenarios.
Similarly, in Section 2.3 (Non-uniform Sampling Theory), we have included Table 2-2, which summarizes different non-uniform sampling approaches (random, pseudo-random, adaptive), their mechanisms, advantages, limitations, and suitable maritime scenarios.
These tabular summaries, together with the narrative review, provide a clearer and more comprehensive overview of existing approaches, models, and their use cases. We believe that this structured presentation substantially strengthens the manuscript by enabling readers to more easily compare related works and understand how our proposed method differentiates itself.
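To make the LoRA row of Table 2-1 more concrete for readers, the following is a minimal, illustrative sketch of a LoRA setup using the Hugging Face peft library; the model identifier, target module names, and hyperparameter values are placeholders for illustration and do not describe the manuscript's actual Doubao-Seed-1.6 configuration.

```python
# Illustrative LoRA configuration (generic sketch, not the manuscript's actual setup).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("your-base-llm")  # placeholder model id

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # low-rank dimension (assumed value)
    lora_alpha=16,                        # scaling factor (assumed value)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; names depend on the model
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()        # only the low-rank adapter weights are trainable
```

Prefix Tuning can be configured analogously via peft's PrefixTuningConfig (num_virtual_tokens parameter); the manuscript's actual prefix length, LoRA ratio, and optimizer settings are reported in Section 3.2.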
Added comparative review tables in Section 2
Pages: 4–8
Paragraphs: Section 2.1 and Section 2.3
Global lines:
Line 146: “Table 2-1 Comparison of Mainstream PEFT Methods”
Line 193: “Table 2-2 Classification and Characteristics of Non-Uniform Sampling”
6.Comment: Please make sure that the quality of figures and readability of text is consistent and not blurred, especially in larger diagrams (especially Fig. 4 and Fig. 5, where the text seems slightly blurred)
6.Response: We sincerely thank the reviewer for pointing this out. In the revised manuscript, we have improved the quality and resolution of all figures, with particular attention to Figures 4 and 5:
1.The figures have been regenerated at higher resolution to ensure that all text and graphical elements remain sharp and fully legible in both digital and print formats.
2.The font sizes within the figures have been standardized and enlarged where necessary to enhance readability.
3.We also checked all other figures to ensure consistency of quality and clarity across the manuscript.
We believe these improvements resolve the issue. Should the reviewer still find any figure unclear, we will further remake and refine the diagrams to fully ensure the quality of the paper.
Improving figure quality and readability (focus on Fig. 4 and Fig. 5)
Pages: 18–20
Paragraphs/Figures: Section 4.4 and Sections 4.4.1–4.4.2
Global lines: 636, 649
7.Comment: Conclusion might include discussion of future works and potential further extensions
7.Response: We thank the reviewer for this valuable suggestion. In the revised manuscript, we have extended the Conclusion (Section 6) to explicitly discuss possible future research directions and potential extensions of our work. Specifically, we have added three aspects:
Expanding extreme scenario data and knowledge: Collecting additional edge cases such as multi-ship cooperative collision avoidance and ultra-long data missing, and incorporating more prior knowledge (e.g., priority ranking rules under extreme sea conditions).
Introducing multimodal fusion modules: Designing lightweight CNN branches to integrate radar image features with textual and numerical data, thereby improving robustness in low-visibility scenarios.
Optimizing model lightweight deployment: Applying quantization and knowledge distillation (e.g., QLoRA, pruning) to compress the model for real-time applications on resource-limited shipboard devices (an illustrative sketch of such a setup is given below).
We believe these additions enrich the conclusion, providing readers not only with a summary of current contributions but also a clear perspective on future works and potential extensions of our study.
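As a purely illustrative sketch of the lightweight-deployment direction above (an assumption about what a QLoRA-style setup could look like, not part of the present study), the base model would typically be loaded in 4-bit precision before adapters are attached; the model identifier and settings below are placeholders.

```python
# Hypothetical QLoRA-style 4-bit loading for resource-limited deployment (illustrative only).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training

bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # 4-bit NormalFloat quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained("your-base-llm", quantization_config=bnb_cfg)
model = prepare_model_for_kbit_training(model)  # prepares norms/embeddings for stable k-bit training
```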
Added discussion of future work in Conclusion
Pages: 28–29
Paragraphs: Section 6 Conclusions, final part
Global lines: approx. 954-975
8.Comment: More detailed discussion of both the advantages and limitations compared to other solutions should be included in the section before conclusion, while the last subsection should be removed. Conclusion usually has no subsections in most journal paper formats.
8.Response: We sincerely thank the reviewer for this helpful suggestion. In the revised manuscript, we have made the following changes:
Enhanced comparative discussion:
In Section 5 (Results and Analysis), we have expanded the discussion to provide a more detailed comparison of our proposed method against other baseline and control models.
This includes highlighting the advantages (e.g., higher Top-1 accuracy in close and away scenarios, improved scene-sensitive recall in high-risk avoid scenarios) and explicitly acknowledging the limitations (e.g., reliance on sufficient annotated maritime data, performance degradation in extreme scenarios not covered by training).
These points are now clearly articulated to give readers a balanced view of the strengths and weaknesses of our approach compared with existing solutions.
Restructuring of the Conclusion:
Following the reviewer’s advice, the last subsection of the Conclusion has been removed.
The Conclusion is now presented as a single, cohesive section without subsections, in line with the common format of journal papers.
We believe these revisions make the manuscript more concise and reader-friendly, while also providing a clearer and more balanced assessment of our method relative to other solutions.
Expanded discussion of advantages and limitations in results
Pages: 19–27
Paragraphs: Section 5 Results (especially 5.2.3 Comprehensive performance comparison and advantage analysis)
Global lines: 857-911
Restructured Conclusion
Pages: 28–29
Paragraphs: Section 6 Conclusions
Global lines: 924-974
9.Comment: make sure that the titles of sections and subsections are intuitive enough and not too long
9.Response: We appreciate the reviewer’s careful observation. In the revised manuscript, we have carefully reviewed and refined the titles of sections and subsections to ensure that they are concise, intuitive, and reader-friendly:
Long or overly descriptive titles have been shortened while retaining their essential meaning. For example, Section 2 has been retitled simply as “Review of Research Status”, with its subsections named in a straightforward manner (e.g., “Fine-tuning Techniques for Large Language Models”, “Non-uniform Sampling Theory”, “Resolution Basis of Maritime Target Elements”).
Section 3 has been renamed as “Non-uniform Sampling Strategy Design” to make it more concise and easier to follow.
Across the manuscript, parallel structure and consistent phrasing have been applied to subsection titles, avoiding redundancy and ensuring intuitive readability.
We believe these revisions make the structure of the manuscript clearer and more accessible to readers, fully aligning with the reviewer’s recommendation.
Refining and shortening section/subsection titles
Pages: 4–13
Paragraphs: Section 2, Section 3, and several subsections
Global lines: approx. 124, 126, 303, ...
10.Comment: Fig. 1 - text labels should be enlarged
10.Response: We thank the reviewer for this practical suggestion. In the revised manuscript, we have enlarged the text labels in Figure 1 to ensure better readability and clarity. The updated figure now has appropriately sized labels, making all elements easily legible in both digital and print formats.
We believe this adjustment improves the overall presentation quality of the manuscript.
Enlarged text labels in Figure 1
Pages: Page 13
Paragraphs/Figures: Section 4.1 Overall Design of the Resolution Model
Global lines: around line 442
11.Comment: larger number of more recent references should be included (especially the related works)
11.Response: We sincerely thank the reviewer for this important suggestion. In the revised manuscript, we have updated and expanded the reference list to include a larger number of more recent works, particularly in the related works section (Section 2). Specifically:
We incorporated recent studies published between 2023–2025, covering advancements in large language model fine-tuning techniques, parameter-efficient tuning methods, and non-uniform sampling applications.
Examples include recent works on GPT-5, Gemini 2.5 and Llama-3.1, as well as the latest research on maritime target detection and non-uniform sampling visualization.
These updated references strengthen the literature review, provide readers with up-to-date insights, and better contextualize our contributions within current research trends.
We believe that these additions substantially enhance the comprehensiveness and currency of the manuscript.
Added and updated recent references
Pages: 34–36
Paragraphs: References
Global lines: 999, 1017, 1019, ...
Reviewer 2 Report
Comments and Suggestions for Authors
The title and objectives stated by the authors in the abstract of the paper correspond to its content.
The work is well structured and the bibliographic study is well analyzed, with minor problems in the bibliographical references.
Observations:
- Bibliographic references are not listed in order in the text. The authors start with references [13, 14] and then [1], etc.
- Reference missing in text [9]
- Bibliographic references listed incompletely, e.g. 24, 41.
Chapter 3 is coherent and innovative. The authors propose the integration of three directions: adapting the LLM to the maritime domain; an efficient fine-tuning strategy for low cost and robustness; and innovation in non-uniform sampling with semantic support from the LLM for resolving maritime target elements.
However, the choice of Doubao-Seed-1.6 is not justified compared to other LLMs (e.g., LLaMA 2, MPT). The preprocessing presented by the authors is very laborious, and it is not clear whether it can be applied to real-time streams.
Suggestion: Including a comparative benchmark (Doubao vs. LLaMA-2 vs. MPT) on a maritime dataset.
In chapter 4, the authors propose a model composed of three modules:
- Multi-source data preprocessing and fusion – a unification of radar, AIS, sonar, etc. inputs
- Rule-guided LLM inference – extracting target elements through semantic reasoning (e.g., recognizing collision avoidance scenarios).
- Optimized numerical calculation – application of estimation algorithms (extended Kalman filtering, adaptive spline interpolation) to determine the final parameters.
Data from different sensors is synchronized and transformed into a unified framework. LLM ensures semantic correlation, and statistical algorithms reduce noise and inconsistency.
The proposed model was evaluated on a mixed set of real data collected from navigation logs and AIS.
Suggestion: The data collection period should be specified.
The data was divided into 70% - 15% - 15% for training, validation, and testing, respecting the balance of sampling scenarios.
And, the statistical results obtained are good, for example:
- the average reduction in position error is 29.6% compared to conventional approaches.
- ANOVA analysis shows that the greatest gain is in critical scenarios, where errors were reduced by up to 45%.
- the computational cost has been reduced by almost half thanks to the prefix tuning + LoRA strategy.
The results obtained confirm the feasibility of implementation on computing systems with limited resources.
In conclusion, the scientific content of the article is good, and the results obtained are relevant.
Author Response
Manuscript ID: jmse-3826776 "Optimization of Maritime Target Element Resolution Strategies for Non-uniform Sampling Based on Large Language Model Fine-tuning"
Response to reviewers
Thank you for your detailed and meticulous review of our manuscript titled "Optimization of Maritime Target Element Resolution Strategies for Non-uniform Sampling Based on Large Language Model Fine-tuning". We deeply value the detailed and insightful feedback you have provided, which not only identifies areas for enhancement but also recognizes the innovative contributions of our research to the evolving field of intelligent maritime navigation.
Acknowledging the importance of your comments, we have undertaken a careful and comprehensive revision of our manuscript. We have strived to address each concern you raised, aiming to improve the manuscript's clarity, completeness, and academic rigor.
Our primary objective through these revisions is to refine our manuscript such that it more effectively communicates its significance, methodologies, and findings, thereby ensuring it substantially contributes to the ongoing scholarly dialogue within the maritime navigation domain. We are resolute in our commitment to enhance the manuscript in accordance with your esteemed feedback and eagerly anticipate any further advice that could elevate our work's quality and scholarly impact.
We express our sincere gratitude for the time and effort you have dedicated to reviewing our manuscript. We are committed to resolving any outstanding issues and advancing the manuscript towards publication. Once again, we thank you for your rigorous review and constructive criticism.
1.Comment:
1.Bibliographic references are not listed in order in the text. The authors start with references [13, 14] and then [1], etc.
2.Reference missing in text [9]
3.Bibliographic references listed incompletely, e.g., 24, 41.
1.Response: We sincerely thank the reviewer for carefully checking the references. In the revised manuscript, we have made the following corrections:
Reordered references: All in-text citations have been carefully checked and reordered to ensure that references are listed sequentially in the order of their first appearance in the text, in full compliance with the journal’s referencing style.
Addressed missing reference [9]: In the original version, we did use the content of Reference [9] in the text, but the citation was accidentally omitted. We have now added the missing citation back into the appropriate place in the manuscript, and updated the reference list accordingly.
Completed incomplete references: All bibliographic entries (including previously incomplete ones such as [24] and [41]) have been corrected and completed with full author names, titles, publication venues, year, and DOI/URL where applicable.
We believe these revisions have resolved the issues raised and ensured that the reference list is now accurate, consistent, and complete.
Corrections on reference order, missing citation, and incomplete entries
Pages: 2–36 (all citation locations in text + References section at the end)
Paragraphs: Citations in Introduction and Section 2 (Review of Research Status); References list
Global lines: approx. 47, 51, 56, 1054, ...
2.Comment: Chapter 3 is coherent and innovative. The authors propose the integration of three directions: adapting the LLM to the maritime domain; an efficient fine-tuning strategy for low cost and robustness; and innovation in non-uniform sampling with semantic support from the LLM for resolving maritime target elements.
However, the choice of Doubao-Seed-1.6 is not justified compared to other LLMs (e.g., LLaMA 2, MPT). The preprocessing presented by the authors is very laborious, and it is not clear whether it can be applied to real-time streams.
Suggestion: Including a comparative benchmark (Doubao vs. LLaMA-2 vs. MPT) on a maritime dataset.
2.Response: We sincerely thank the reviewer for this insightful comment. In this study, our primary goal was to validate the effectiveness of the proposed methodology. Therefore, we deliberately chose a relatively lightweight and localized model (Doubao-Seed-1.6) as the baseline, in order to demonstrate that even on a more basic model, the integration of domain adaptation, efficient fine-tuning, and non-uniform sampling innovation can already achieve strong results.
We agree with the reviewer that a comparative benchmark with other widely used models (e.g., LLaMA 2, MPT) would provide additional value. While such a comparison is beyond the scope of this current paper, we appreciate this suggestion and will extend our work in this direction in future studies. To address this point, we have also noted in the Conclusion and Future Work section that further research will explore benchmarking our approach against multiple LLMs and assessing the feasibility of preprocessing methods for real-time maritime data streams.
We believe this clarifies our design choice in the current study while outlining the pathway for more comprehensive comparisons in subsequent research.
Explanation of Doubao-Seed-1.6 choice and rationale
Pages: 9–12 (Section 3.1 Selection and Adaptation of Large Language Models)
Paragraphs: 3.1.1 Basis for model selection
Global lines: approx. 305-329
Future benchmark studies in Conclusion
Pages: Page 29 (Section 6 Conclusions, Future Work part)
Paragraphs: Final paragraph of Conclusion
Global lines: 971-975
3.Comment: In chapter 4, the authors propose a model composed of three modules:
- Multi-source data preprocessing and fusion – a unification of radar, AIS, sonar, etc. inputs
- Rule-guided LLM inference – extracting target elements through semantic reasoning (e.g., recognizing collision avoidance scenarios).
- Optimized numerical calculation – application of estimation algorithms (extended Kalman filtering, adaptive spline interpolation) to determine the final parameters.
Data from different sensors is synchronized and transformed into a unified framework. LLM ensures semantic correlation, and statistical algorithms reduce noise and inconsistency.
The proposed model was evaluated on a mixed set of real data collected from navigation logs and AIS.
Suggestion: The data collection period should be specified.
3.Response: We thank the reviewer for this valuable suggestion. In the revised manuscript, we have clarified the data collection period for the real-world datasets used in our evaluation. Specifically, the measured multi-source data (navigation logs, AIS, radar, and sonar) were collected during the period 2021–2025. This information has been added in Section 4.3.1 (Collection and Processing of Measured Data) to provide greater transparency and reproducibility of our experiments.
We believe this addition improves the completeness and clarity of the dataset description.
Added data collection period in Chapter 4
Pages: 16–17
Paragraphs: Section 4.3.1 Collection and Processing of Measured Data
Global lines: approx. 527
Reviewer 3 Report
Comments and Suggestions for Authors
Reviewer’s report on JMSE paper titled “Optimization of maritime target element resolution strategies for non-uniform sampling based on large language model fine-tuning”
The paper proposes a fine-tuned LLM-based adaptive optimization method to determine maritime target element extraction approaches for non-uniform sampling. The topic is timely and potentially impactful, but several areas need improvement before the paper can be considered further.
First of all, the abstract should be revised to better highlight the main contributions and novelty of the work. At present, it is too brief and does not clearly explain the unique aspects of the proposed method or the key findings.
The overall quality of the English writing must be improved too. In several places, sentences are not clear, making it difficult to follow the text.
The Introduction is unnecessarily lengthy and provides mostly generic background on LLMs. Instead, it should focus more directly on the relevance of LLMs to maritime target element extraction strategies. Figures 2 and 3 are not explained well and require more detailed discussion. It would also strengthen the paper to include references to recent advances in LLM and GPT models to highlight the state-of-the-art.
Section 2 is also too lengthy and would benefit from restructuring. A table summarizing prior works, their contributions, advantages, and limitations, as well as how the current study differs, would make this section much clearer and more impactful.
Section 3 presents the sampling strategy, explains the rationale for choosing LLM models, and discusses the fine-tuning approach. Including a flowchart or schematic diagram to illustrate the steps of the process (or overall methodology) would make the section clearer.
Section 4 represents the core of the paper and introduces the proposed model. The section currently seems disorganized. The subheadings are inconsistent and do not clearly reflect the logical flow of the content, making it difficult for readers to follow.
Section 5 presents the results of the LLM approach and includes some comparative analyses. However, the discussion of these results needs significant improvement. In particular, the authors should clearly specify which performance metrics were used in the comparative analyses and explain why these metrics were chosen. A more detailed interpretation of the results, highlighting both strengths and limitations of the proposed approach in relation to existing methods, would also strengthen this section.
Author Response
Manuscript ID: jmse-3826776 "Optimization of Maritime Target Element Resolution Strategies for Non-uniform Sampling Based on Large Language Model Fine-tuning"
Response to reviewers
Thank you for your detailed and meticulous review of our manuscript titled "Optimization of Maritime Target Element Resolution Strategies for Non-uniform Sampling Based on Large Language Model Fine-tuning". We deeply value the detailed and insightful feedback you have provided, which not only identifies areas for enhancement but also recognizes the innovative contributions of our research to the evolving field of intelligent maritime navigation.
Acknowledging the importance of your comments, we have undertaken a careful and comprehensive revision of our manuscript. We have strived to address each concern you raised, aiming to improve the manuscript's clarity, completeness, and academic rigor.
Our primary objective through these revisions is to refine our manuscript such that it more effectively communicates its significance, methodologies, and findings, thereby ensuring it substantially contributes to the ongoing scholarly dialogue within the maritime navigation domain. We are resolute in our commitment to enhance the manuscript in accordance with your esteemed feedback and eagerly anticipate any further advice that could elevate our work's quality and scholarly impact.
We express our sincere gratitude for the time and effort you have dedicated to reviewing our manuscript. We are committed to resolving any outstanding issues and advancing the manuscript towards publication. Once again, we thank you for your rigorous review and constructive criticism.
1.Comment: First of all, the abstract should be revised to better highlight the main contributions and novelty of the work. At present, it is too brief and does not clearly explain the unique aspects of the proposed method or the key findings.
The overall quality of the English writing must be improved too. In several places, sentences are not clear, making it difficult to follow the text.
1.Response:
We sincerely thank the reviewer for this important feedback. In the revised manuscript, we have made the following improvements:
1.Abstract revision:
The abstract has been substantially rewritten to better highlight the main contributions and novelty of our work. It now explicitly states:
- The motivation and limitations of traditional methods in maritime target element resolution.
- The novel hybrid fine-tuning strategy (“Prefix Tuning + LoRA”) combined with domain adaptation.
- The innovation of integrating non-uniform sampling optimization with LLMs, forming a closed-loop “sampling–resolution–optimization” mechanism.
- The key experimental findings, including performance improvements in Top-1 Accuracy and Scene-Sensitive Recall across different maritime scenarios compared with baseline models.
2.Improvement of English writing:
We have carefully revised the entire manuscript to improve clarity, grammar, and readability. Long or unclear sentences were simplified, redundant wording was removed, and transitions between sections were smoothed. Professional editing tools and manual proofreading were also applied to enhance the overall quality of English writing.
We believe these revisions significantly strengthen the abstract by clearly presenting the novelty and contributions, while also improving the readability and accessibility of the entire manuscript.
Abstract rewritten to highlight contributions and novelty
Pages: Page 1
Paragraphs: Abstract
Global lines: 9-27
2.Comment: The Introduction is unnecessarily lengthy and provides mostly generic background on LLMs. Instead, it should focus more directly on the relevance of LLMs to maritime target element extraction strategies. Figures 2 and 3 are not explained well and require more detailed discussion. It would also strengthen the paper to include references to recent advances in LLM and GPT models to highlight the state-of-the-art.
2.Response: We sincerely thank the reviewer for this constructive feedback. In the revised manuscript, we have made the following changes:
1.Streamlined Introduction:
The Introduction has been shortened to reduce generic background on LLMs. The revised text now focuses more directly on the specific relevance of LLMs to maritime target element resolution, emphasizing challenges such as domain adaptation gaps, data sparsity under non-uniform sampling, and the need for efficient fine-tuning strategies.
2.Improved explanation of Figures 2 and 3:
The captions and in-text discussions of Figures 2 and 3 have been expanded to provide clearer explanations of their content and relevance. We now detail how each figure illustrates the hierarchical prompt system and the fine-tuning workflow, and how these components contribute to the overall methodology.
3.Updated references to state-of-the-art LLMs:
To strengthen the literature review, we have added recent references on advances in large language models, including GPT-5 (2025), Google Gemini 2.5, and Meta’s Llama-3.1 series, highlighting their relevance to multimodal integration, long-context processing, and domain adaptability. These additions better position our work within the current state-of-the-art.
We believe these revisions make the Introduction more concise, improve the clarity of Figures 2 and 3, and provide a stronger connection to the latest research progress in LLMs.
Streamlined Introduction with relevance focus
Pages: 1–3
Paragraphs: Introduction
Global lines: 31-123
Expanded explanation of Figures 2 and 3
Pages: 9–12
Paragraphs: Section 3 Non-uniform Sampling Strategy Design
Global lines: approx.186-225
Updated references to state-of-the-art
Pages: 1-3、34–36
Paragraphs: Introduction、 References
Global lines: 76-91, 1061, 1017, ...
3.Comment: Section 2 is also too lengthy and would benefit from restructuring. A table summarizing prior works, their contributions, advantages, and limitations, as well as how the current study differs, would make this section much clearer and more impactful.
3.Response: We sincerely thank the reviewer for this valuable suggestion. In the revised manuscript, we have addressed this concern in the following ways:
1.Restructuring Section 2:
Section 2 has been reorganized into four concise subsections:
2.1 Fine-tuning Techniques for Large Language Models
2.2 Optimal Selection Methods Based on Large Language Models
2.3 Non-uniform Sampling Theory
2.4 Resolution Basis of Maritime Target Elements
This restructuring reduces redundancy and makes the narrative flow clearer.
2.Inclusion of summary tables:
To improve clarity and impact, we have added tabular summaries within Section 2. Specifically:
Table 2-1 compares mainstream parameter-efficient fine-tuning (PEFT) methods, summarizing their principles, advantages, limitations, and application scenarios.
Table 2-2 outlines different types of non-uniform sampling, their mechanisms, strengths, weaknesses, and suitable maritime applications.
3.Clarifying the novelty of our study:
In the revised text, we explicitly highlight how our proposed approach differs from prior works, emphasizing our hybrid fine-tuning strategy and integration of non-uniform sampling optimization with LLMs.
We believe these revisions make Section 2 more concise, structured, and impactful, while providing readers with a clearer overview of related works and the distinct contributions of our study.
Section 2 restructuring and summary tables
Pages: 4–8
Paragraphs: Section 2 (reorganized into four subsections)
Global lines: approx. 124-247
4.Comment: Section 3 presents the sampling strategy, explains the rationale for choosing LLM models, and discusses the fine-tuning approach. Including a flowchart or schematic diagram to illustrate the steps of the process (or overall methodology) would make the section clearer.
4.Response: We sincerely thank the reviewer for this valuable suggestion. In the revised manuscript, we have added a schematic diagram (Figure 3-3) in Section 3 to visually illustrate the overall methodology. The figure provides a clear flow of the process, including:
1.Selection and adaptation of LLMs (model choice and domain-specific preprocessing).
2.Fine-tuning strategy (Prefix Tuning + LoRA hybrid approach).
3.Innovation in non-uniform sampling strategy (semantic-based sampling point selection and multi-source data fusion).
This flowchart clarifies the logical sequence of steps and highlights how each component contributes to the overall framework. We believe this addition significantly improves the readability and accessibility of Section 3, as recommended.
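As a rough, non-authoritative illustration of the third step in this flow (semantic-driven non-uniform sampling), the sketch below simply shortens the sampling interval when the model judges a scene to be high-risk; the scene labels, interval values, and the classify_scene stand-in are assumptions for the example and do not reproduce the strategy shown in Figure 3-3.

```python
# Hypothetical sketch: pick the next non-uniform sampling interval from an LLM-inferred scene label.
from typing import Callable

SCENE_INTERVALS_S = {   # assumed interval values in seconds, not the paper's settings
    "avoid": 2.0,       # high-risk collision-avoidance scene: densest sampling
    "close": 5.0,
    "away": 15.0,
}

def next_sampling_interval(observation_text: str,
                           classify_scene: Callable[[str], str],
                           default_s: float = 10.0) -> float:
    """Return the next sampling interval; `classify_scene` stands in for the fine-tuned LLM."""
    scene = classify_scene(observation_text)
    return SCENE_INTERVALS_S.get(scene, default_s)

# Toy usage: a dummy classifier that flags any report mentioning a decreasing CPA as an avoid scene.
interval = next_sampling_interval(
    "Target 03: bearing 045, range 2.1 nm, CPA decreasing",
    classify_scene=lambda text: "avoid" if "CPA" in text else "close",
)
print(interval)  # 2.0
```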
Added flowchart in Section 3
Pages: Page 12
Paragraphs/Figures: Section 3 Non-uniform Sampling Strategy Design
Global lines: around line 428
5.Comment: Section 4 represents the core of the paper and introduces the proposed model. The section currently seems disorganized. The subheadings are inconsistent and do not clearly reflect the logical flow of the content, making it difficult for readers to follow.
5.Response: Thank you for your valuable comment on Chapter 4 of our manuscript. You pointed out that "Section 4 represents the core of the paper and introduces the proposed model. The section currently seems disorganized. The subheadings are inconsistent and do not clearly reflect the logical flow of the content, making it difficult for readers to follow." We fully agree with this feedback and have made targeted revisions to optimize the structure, consistency, and readability of Chapter 4. The key revisions and responses are as follows:
- Optimized Logical Framework to Address "Disorganized Structure"
We reconstructed Chapter 4 around the core theme of "maritime target element resolution model construction and strategy adaptive optimization," following the academic research paradigm of "Overall Design → Core Implementation → Supporting Conditions → Effect Verification" to form a closed and coherent logical chain:
4.1 Overall Design of the Resolution Model: Starts with the problem background (poor adaptability, low expert knowledge utilization under non-uniform sampling) and proposes an expert prior-guided prompt learning framework. It then decomposes the framework into "core functional modules" (data preprocessing, feature extraction, resolution decision, result output) and "hierarchical prompt embedding mechanism," clarifying the model’s overall architecture and answering "what the model is."
4.2 LLM Fine-Tuning Based on Prior Knowledge Fusion: Focuses on the model’s core implementation link, explaining how to integrate navigation rules, expert experience, and historical data into LLM fine-tuning (knowledge acquisition → dynamic weight adjustment → parameter setting). This section connects with the "hierarchical prompt system" in 4.1, solving "how to enable the model to have professional reasoning capabilities."
4.3 Dataset Construction for Model Training and Testing: Addresses the model’s supporting conditions by constructing a hybrid dataset of "measured data + simulation data" (focused on non-uniform sampling scenarios). It ensures the reliability of model training and testing, responding to the "data basis" required for fine-tuning in 4.2.
4.4 Verification of Model Core Capabilities: Finally, verifies the model’s effectiveness through "text information comprehension" and "navigation data processing" tests, directly answering "whether the model works" and forming a complete research chain from design to verification.
- Standardized Subheading Nomenclature to Resolve "Inconsistent Subheadings"
We unified the naming logic of subheadings at all levels to ensure consistency and readability:
Consistent structure of Level-2 headings: All adopt the format of "Core Content + Attribute," e.g., "Overall Design of the Resolution Model" (content: overall design; attribute: of the resolution model), "LLM Fine-Tuning Based on Prior Knowledge Fusion" (content: LLM fine-tuning; attribute: based on prior knowledge fusion). This avoids confusion caused by arbitrary naming in the original version.
Logical consistency of Level-3 headings: Under each Level-2 heading, Level-3 headings follow either "sequential process" or "component division" logic. For example:
In 4.1, "Core Modules and Functional Division" (decomposing the model structure) and "Embedding Mechanism of the Hierarchical Prompt System" (explaining key mechanisms) are two core components of the "overall design," with clear division of labor.
In 4.2, "Acquisition and Structured Coding of Prior Knowledge" (preparation), "Dynamic Weight Adjustment in Fine-Tuning" (core method), and "Fine-Tuning Parameter Settings" (specific implementation) follow the process logic of "preparation → execution → parameterization."
In 4.3, "Collection and Processing of Measured Data" and "Generation of Simulation Data" correspond to two types of datasets, with parallel and complementary relationships.
This standardized naming allows readers to quickly grasp the logical connections between sections by reading subheadings alone.
- Strengthened Content Cohesion to Improve "Readability for Readers"
To avoid "information fragmentation," we added explicit connection logic between sections and clarified the focus on core scenarios:
Cross-referencing between sections: For example, 4.2.1 mentions that "structured knowledge is converted into prompt-compatible text format," echoing the "hierarchical prompt system" in 4.1.2; 4.3.2 notes that algorithm labeling is based on the "scenario-algorithm rules in Section 4.1.2," forming a cross-link between dataset construction and model design.
Clear logical cues within sections: Each module in 4.1.1 is described using the logic of "Guidance Basis → Specific Functions → Output Results" (e.g., the Data Preprocessing Layer is "guided by bottom-level general navigation knowledge → cleans and standardizes multi-source data → outputs structured tuples"), helping readers quickly understand the role of each module (a toy example of such a structured tuple is sketched below).
Focus on core scenarios: The entire chapter repeatedly emphasizes "non-uniform sampling scenarios" (e.g., problem background in 4.1, simulation of random sampling features in 4.3, verification of data processing capabilities in 4.4), ensuring readers can focus on the model’s core application scenarios without being distracted by irrelevant information.
We believe the revised Chapter 4 now has a clearer logical flow, more consistent subheadings, and stronger readability, which will help readers better understand the core content of the proposed model. Thank you again for your constructive feedback, which has significantly improved the quality of our manuscript.
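Purely to illustrate the kind of "structured tuple" output mentioned for the Data Preprocessing Layer above, a toy record might look as follows; the field names and units are assumptions for the example, not the manuscript's schema.

```python
# Hypothetical structured observation emitted by a preprocessing layer (illustrative only).
from dataclasses import dataclass

@dataclass
class TargetObservation:
    timestamp: float     # seconds since epoch; spacing between records is non-uniform
    source: str          # "radar", "AIS", "sonar", ...
    bearing_deg: float   # true bearing to the target, degrees
    range_nm: float      # range to the target, nautical miles
    quality: float       # sensor confidence in [0, 1]

obs = TargetObservation(timestamp=1714032000.0, source="AIS",
                        bearing_deg=47.5, range_nm=3.2, quality=0.9)
```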
Restructuring Chapter 4 and standardizing subheadings
Pages: 11–19
Paragraphs: Section 4 Construction of Maritime Target Element Resolution Model and Strategy Adaptive Optimization Model
Global lines: approx. 430-665
6.Comment: Section 5 presents the results of the LLM approach and includes some comparative analyses. However, the discussion of these results needs significant improvement. In particular, the authors should clearly specify which performance metrics were used in the comparative analyses and explain why these metrics were chosen. A more detailed interpretation of the results, highlighting both strengths and limitations of the proposed approach in relation to existing methods, would also strengthen this section.
6.Response: We sincerely thank the reviewer for this constructive feedback. In the revised manuscript, we have substantially improved the discussion of results in Section 5 as follows:
Explicit specification of performance metrics:
We now clearly define the evaluation metrics used in the comparative analyses, including Top-1 Accuracy, Top-3 Accuracy, Δ-Accuracy (tolerance for slight misjudgment), and Scene-Sensitive Recall. Each metric is introduced with a formal definition (see Table 5-2) and an explanation of its relevance; an illustrative sketch of the Top-k and scene-sensitive computations is given below.
Justification of metric selection:
We explain why these metrics were chosen—namely, to capture both overall strategy accuracy (Top-1 Accuracy), robustness of near-optimal decisions (Top-3 Accuracy), tolerance for small deviations (Δ-Accuracy), and safety-critical adaptability (Scene-Sensitive Recall). This multi-dimensional evaluation framework ensures a balanced and realistic assessment of model performance in practical maritime contexts.
Reinforcing clarity in the text:
While the formal definitions of these metrics were already provided in Table 5-2, we acknowledge that, as the reviewer pointed out, the discussion in the text did not elaborate on them sufficiently. Therefore, in the revised version we briefly reintroduce the meaning of each metric in the results analysis section, so that readers can better follow the comparative discussions without needing to refer back to the table.
Expanded interpretation of results:
The discussion has been extended to highlight both the strengths of the proposed approach (e.g., significant gains in Top-1 Accuracy in “close” and “away” scenarios, and 2.2× higher Scene-Sensitive Recall in “avoid” scenarios compared to the baseline) and its limitations (e.g., reliance on annotated maritime data, reduced performance in extreme scenarios not represented in training).
We believe these revisions provide a more comprehensive and balanced interpretation of the results, thereby strengthening Section 5 and aligning it with the reviewer’s recommendation.
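For readers who prefer code to prose, a minimal illustrative sketch of the Top-k accuracy and scene-sensitive recall computations described above follows; the data layout and the restriction of recall to "avoid" scenes are assumptions for the example, and Δ-Accuracy is omitted because its tolerance rule is defined in Table 5-2 of the manuscript.

```python
# Illustrative metric computations (assumed data layout, not the manuscript's evaluation code).
from typing import Sequence

def top_k_accuracy(ranked_preds: Sequence[Sequence[str]], labels: Sequence[str], k: int) -> float:
    """Fraction of samples whose true strategy appears among the top-k ranked predictions."""
    hits = sum(label in preds[:k] for preds, label in zip(ranked_preds, labels))
    return hits / len(labels)

def scene_sensitive_recall(ranked_preds: Sequence[Sequence[str]], labels: Sequence[str],
                           scenes: Sequence[str], scene: str = "avoid") -> float:
    """Top-1 recall restricted to samples from a safety-critical scene (assumed: 'avoid')."""
    idx = [i for i, s in enumerate(scenes) if s == scene]
    if not idx:
        return 0.0
    return sum(ranked_preds[i][0] == labels[i] for i in idx) / len(idx)

# Toy example with three samples.
preds  = [["A", "B", "C"], ["B", "A", "C"], ["C", "B", "A"]]
labels = ["A", "A", "C"]
scenes = ["avoid", "avoid", "close"]
print(top_k_accuracy(preds, labels, k=1))             # ≈ 0.67
print(scene_sensitive_recall(preds, labels, scenes))  # 0.5
```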
Expanded and improved results discussion in Section 5
Pages: 19–27
Paragraphs: Section 5 Results and Analysis
Global lines: approx. 776-791
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
Thank you for taking into account the proposed changes and suggestions. After this revision, the paper becomes easier to follow and better organized.
Author Response
We sincerely thank the reviewer for the positive evaluation and for acknowledging the improvements made in the revised version. We are pleased that the manuscript is now considered easier to follow and better organized. We truly appreciate the reviewer’s recognition of our efforts, and we will continue to maintain the same level of rigor and clarity in our future research.
Reviewer 3 Report
Comments and Suggestions for Authors
Some sections, such as the introduction and literature review, have been addressed to a satisfactory level. However, several other areas still require further attention. The revised version remains unnecessarily lengthy. In addition, the quality of the figures is below standard; they appear to have low resolution, which makes them unclear and difficult to interpret.
Author Response
Manuscript ID: jmse-3826776 "Optimization of Maritime Target Element Resolution Strategies for Non-uniform Sampling Based on Large Language Model Fine-tuning"
Response to reviewers
Thank you for your detailed and meticulous review of our manuscript titled "Optimization of Maritime Target Element Resolution Strategies for Non-uniform Sampling Based on Large Language Model Fine-tuning".
Acknowledging the importance of your comments, we have undertaken a careful and comprehensive revision of our manuscript. We have strived to address each concern you raised, aiming to improve the manuscript's clarity, completeness, and academic rigor. We are resolute in our commitment to enhance the manuscript in accordance with your esteemed feedback and eagerly anticipate any further advice that could elevate our work's quality and scholarly impact.
We express our sincere gratitude for the time and effort you have dedicated to reviewing our manuscript. We are committed to resolving any outstanding issues and advancing the manuscript towards publication. Once again, we thank you for your rigorous review and constructive criticism.
Comment: Some sections, such as the introduction and literature review, have been addressed to a satisfactory level. However, several other areas still require further attention. The revised version remains unnecessarily lengthy. In addition, the quality of the figures is below standard; they appear to have low resolution, which makes them unclear and difficult to interpret.
Response: We sincerely thank the reviewer for the constructive feedback. We revised Chapters 3–6 in a targeted manner to reduce unnecessary length and improve clarity. Below we provide a section-by-section explanation of the modifications, what exactly was changed, and why:
- Conciseness and Length Control:
We carefully streamlined the manuscript to improve readability and eliminate unnecessary length, with major revisions applied to Sections 3–6. The main adjustments are detailed below:
Chapter 3 Non-uniform Sampling Strategy Design
3.1 Selection and Adaptation of Large Language Models
To avoid lengthy general explanations and highlight task-specific relevance, redundant cross-domain background descriptions were removed, and the emphasis was shifted to why Doubao-Seed-1.6 is suitable for maritime adaptation.
Page: 7-9
Lines: 271-327
3.2 Fine-tuning Strategy of LLM for Maritime Target Resolution
The long narrative about PEFT methods was condensed into a concise parameter description, including prefix length, LoRA ratio, and optimizer settings. This change was made to make the methodology clearer and reduce repetitive explanation of tuning strategies.
Page: 9
Lines: 328-345
3.3 Innovation of Non-uniform Sampling Strategy
This section was restructured into two clear sub-strategies: “non-uniform sampling point selection based on semantic understanding” and “multi-source data fusion of non-uniform sampling method”. A structured logic chain, consisting of trigger condition → sampling adjustment → benefit point, was adopted. Case examples were consolidated and rewritten to emphasize typical scenarios. The goal was to transform the section from a lengthy narrative into a clearer and more structured strategy description.
Page: 9-10
Lines: 346-369
Chapter 4 Construction of Maritime Target Element Resolution Model
4.1.1 Four-module Design
To reduce redundancy and highlight maritime-specific features, this part was rewritten as bullet-like descriptions, with generic preprocessing details removed.
Page: 11
Lines: 388-406
4.1.2 Embedding Mechanism of the Hierarchical Prompt System
Only one representative example was retained per layer, and redundant rules were removed. This change streamlined the explanation while preserving clarity.
Page: 11
Lines: 407-424
4.4 Verification of Model Core Capabilities
Some detailed input examples and redundant explanations in test results were removed, and the content was summarized as “process + conclusion”. This was done to avoid repetition and highlight the verification outcome.
Page: 14-17
Lines: 539-561
Chapter 5 Results
5.1.1 Purpose
To sharpen the experimental motivation and reduce verbose goals, this section was rewritten into two focused research questions: (1) scene error sensitivity, (2) sub-optimal vs. optimal gap capture.
Page: 17
Lines: 564-573
Chapter 6 Conclusions
Research Results Summary
The three contributions were rewritten into structured bullet points. Some points were supplemented with quantitative results (e.g., Top-1 accuracy of 0.733 in avoidance scenarios, with ~40% improvement over baselines). This made the conclusion more concise, evidence-based, and persuasive.
Page: 23
Lines: 814-835
Future Work
The original three directions, namely extreme data/knowledge expansion, multimodal improvements, and lightweight model adaptation, were retained. The “benchmark comparison,” previously mentioned only briefly as “in addition,” was elevated to a full standalone fourth point. This change highlighted the importance of benchmarking against mainstream models and provided a more balanced research outlook.
- Figures
All figures (Fig. 3-1, Fig. 3-2, Fig. 3-3, Fig. 4-1, Fig. 4-2, Fig. 4-3) in the manuscript were carefully redrawn and systematically replaced with high-resolution versions to ensure clarity and consistency throughout the paper. No additional stylistic or structural modifications were made; the focus was on improving resolution and visual quality.
Pages: 7, 8, 10, 15, 16
Lines: 281, 284, 368, 382, 543, 553
We hope this section-by-section explanation makes clear that Chapters 3, 4 (4.1.1, 4.1.2, 4.4), 5 (5.1.1), and 6 were carefully revised to address redundancy, improve conciseness, and enhance figure clarity. We believe the revised manuscript is now more streamlined, focused, and aligned with the journal’s standards.
Round 3
Reviewer 3 Report
Comments and Suggestions for Authors
The paper has been revised to an acceptable standard and can now be accepted. Thank you.