Article
Peer-Review Record

Joint Translation Method for English–Chinese Place Names Based on Prompt Learning and Knowledge Graph Enhancement

ISPRS Int. J. Geo-Inf. 2025, 14(3), 128; https://doi.org/10.3390/ijgi14030128
by Hanyou Liu and Xi Mao *
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 2 January 2025 / Revised: 3 March 2025 / Accepted: 9 March 2025 / Published: 13 March 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

I have three significant criticisms of this paper.
1. You throw terms around as if the reader already knows everything. I list some of these below that need to be described in more detail. But most especially you need to provide some detail on knowledge graphs and your particular pipelined approach.
2. Many of your paragraphs are needlessly long. There are plenty of opportunities to start new paragraphs.
3. You do not provide nearly enough detail for your figures, tables or really the algorithms you are using. You rely too much on using approaches that others have used without first explaining how they work.

Abstract:
Line 3: you might want to slightly elaborate on the inefficiency and poor accuracy.
Lines 6-7: This part is poorly structured, as you should list 1) and 2) before you go into detail, or change this to read "into two parts. The first part is translation prompt..." and then have the sentence on "Based on ..." and then go on to the second one, replacing "2)" with "Second, ..."
Line 18: you provide the improved accuracy with your approach, but there's no baseline to compare it to, since on line 3 you don't say how inaccurate the existing methods are. Is 20-27% over a lousy accuracy any good? It's hard to understand these values in isolation.

Section 1:
You need to define prompt learning and knowledge graphs in this section. I suggest you define prompt learning after line 52 and briefly introduce knowledge graphs in line 65. You go on to use these terms later without having ever defined/described them.

Section 2:
Line 79: Drop the word "always" in "always been a popular topic".
Line 84: Spell out LLM the first time. "use end-to-end learning" - you assume readers will know what this means. You need to explain the term.
In this part of the section, you say nothing about the mechanisms used for LLM learning via deep neural networks and transformers.
Lines 108-113: A figure showing a pipeline of steps would be useful.
Line 115: What is a grammatical structure tree? A parse tree?
This section is poor in that you throw out all these terms without giving any definition or description. Others include prompt optimization, few-shot prompting, zero-shot, QLoRA, attention-based transliteration replacer, word vector, knowledge graphs.
Really, this entire section throws terms around like the reader is already an expert in all of these works. I suggest you provide more detail with all of the referenced works or provide a short section of definitions of some of these terms.

Section 3:
170: Spell out GLM.
Figure 1: You need to describe the process in the text. You refer to this figure but offer no explanation.
Table 1: Same issue, you reference this with no explanation in the text.
Line 210-211: You say specific examples of C are shown in table 1 but there is only one example, Mountain. Did I miss something?
Formulas 1&2: You need to provide some explanation for what Template is and does.
Line 219: Drop "below" from "As shown in figure 2" because the figure might be moved based on formatting.
Line 241: You jump to table 5. Since table 5 isn't placed until sometime later, you might want to make it clear that table 5 is in section 4 and will be discussed later.
Figure 3: You again refer to but do not explain the figure.
Table 3: This table is hard to decipher because you have two columns (Jackson, Stringybark) with text between them. If possible, add borders around the table cells so that it is clear when text refers to both. Later, you have Yes and No in each column but then you have Completely derived, Park and some Chinese in the first column but not the second. I really couldn't make any sense of whether this was done on purpose or if this table is just sloppy. I suggest you try to redo this table to be clearer.

At this point, I'm going to quit giving you line by line feedback. I find this paper way too vague. There are so many ideas thrown around without adequate explanation that unless someone is already an expert in the field, they will have little to no hope of understanding your approach. I suggest a thorough rewrite where you elaborate on the background work, your particular algorithm and provide more details on the figures and tables.

Author Response

Comments 1: You throw terms around as if the reader already knows everything. I list some of these below that need to be described in more detail. But most especially you need to provide some detail on knowledge graphs and your particular pipelined approach.

Response 1: Thank you very much for your correction. This article has undergone the following revisions:

     (1) For the knowledge graph, this paper adds the subsection '3) Geographical name knowledge graph' to Section '2. Related Work' (lines 164-188 on page 4).

     (2) For the pipeline-based method of place name translation, this paper adds the subsection '2) Place name translation' to Section '2. Related Work' (lines 131-134 on page 3).

Comments 2: Many of your paragraphs are needlessly too long. There are plenty of opportunities to start new paragraphs.

Response 2: Thank you very much for your correction. The longer paragraphs have been broken up in this article. In Section '3.1.2. Template for translating derived place names', the text has been split into 'Derived place name discrimination information prompt', 'Prompt words for derived parts', and 'Prompt words for original place names category'.

Comments 3: You do not provide nearly enough detail for your figures, tables or really the algorithms you are using. You rely too much on using approaches that others have used without first explaining how they work.

Response 3: Thank you very much for your evaluation. In response, the following modifications have been made to this article:

  1) A description of the proposed method has been added to Section 3, Materials and Methods. The usage example is shown in Figure 1, and corresponding annotations are provided (on page 4).

  2) The original Figure 4 has been deleted.

  3) The content on the principles of prompt words in the translation of derived place names has been deleted from the chapter on prompt words, since the mechanism of prompt words is not the focus of this study. The article now focuses instead on the semantic association between prompt words and input place names, and the section has been retitled '3.1.2. Template for translating derived place names' (lines 270-310 on page 7).

Comments 4: 

 1) Abstract, Line 3: you might want to slightly elaborate on the inefficiency and poor accuracy.

 2) Lines 6-7: This part is poorly structured, as you should list 1) and 2) before you go into detail, or change this to read "into two parts. The first part is translation prompt..." and then have the sentence on "Based on ..." and then go on to the second one, replacing "2)" with "Second, ...".

Response 4: Thank you very much for your evaluation. In response, the following modifications have been made to this article:

1) In the Abstract, this paper adds the defects of traditional pipeline-based geographical name translation methods (lines 4-5).

2) Relevant modifications were made in lines 8-9 and 16 of the Abstract.

Comments 5: Line 18: you provide the improved accuracy with your approach, but there's no baseline to compare it to, since on line 3 you don't say how inaccurate the existing methods are. Is 20-27% over a lousy accuracy any good? It's hard to understand these values in isolation.

Response 5: Thank you very much for your comments. The reported improvement in this study is measured against the traditional pipeline-based translation of English place names into Chinese. In the fifth line of the Abstract, the limitations of traditional pipeline-based English-Chinese place name translation have been added (shown in blue font, lines 3-4).

Comments 6: You need to define prompt learning and knowledge graphs in this section. I suggest you define prompt learning after line 52 and briefly introduce knowledge graphs in line 65. You go on to use these terms later without having ever defined/described them.


Response 6: Thank you very much for your evaluation. The paper has added the relevant concepts of the geographical name knowledge graph, together with the state of research on knowledge graphs in the field of geographical name translation, in Section '2. Related Work -> 3) Geographical name knowledge graph'.

Comments 7: 

      1)Line 79: Drop the word "always" in "always been a popular topic".

      2) Line 84: Spell out LLM the first time. "use end-to-end learning" - you assume readers will know what this means. You need to explain the term.
In this part of the section, you say nothing about the mechanisms used for LLM learning via deep neural networks and transformers.

   3) Lines 108-113: A figure showing a pipeline of steps would be useful.

   4) Line 115: What is a grammatical structure tree? A parse tree?
This section is poor in that you throw out all these terms without giving any definition or description. Others include prompt optimization, few-shot prompting, zero-shot, QLoRA, attention-based transliteration replacer, word vector, knowledge graphs.

Response 7: 

     1) This article has changed the phrase to 'has been a popular topic' (line 92, page 3).

     2) This article has been modified as follows:

           (1) This article has added 'LLM (large language models)' in line 15 of the Abstract.

           (2) This article has added a general explanation of 'end-to-end learning' (lines 106-107, page 3).

           (3) This article has added an introduction to the Transformer model at the beginning of the subsection '1) Machine translation based on prompt learning' (lines 97-108, page 3).

       3) 'Figure 1. Example of pipeline-based place name translation' has been added (lines 135-136, page 4).

      4) (1) In this paper, 'grammatical structure tree' and 'parse tree' are now uniformly expressed as 'structure tree'.

          (2) Prompt optimization: an explanation has been added in line 115, page 3.

          (3) Few-shot prompting: explanations have been added on lines 117 and 118.

          (4) Zero-shot: the relevant term is explained in lines 123-124.

          (5) QLoRA: explained in lines 124-125.

          (6) Attention-based transliteration replacer: a more detailed explanation has been added on lines 155-160, page 4.

          (7) Word vector: deleted.

          (8) Knowledge graphs: a more detailed introduction has been added in Section '2. Related Work -> 3) Geographical name knowledge graph' (lines 165-188).

Comments 8: 

1) 170: Spell out GLM.

2) Figure 1: You need to describe the process in the text. You refer to this figure but offer no explanation.

3) Table 1: Same issue, you reference this with no explanation in the text.

4) Line 210-211: You say specific examples of C are shown in table 1 but there is only one example, Mountain. Did I miss something?

5) Formulas 1&2: You need to provide some explanation for what Template is and does.

6) Line 219: Drop "below" from "As shown in figure 2" because the figure might be moved based on formatting.

7) Line 241: You jump to table 5. Since table 5 isn't placed until sometime later, you might want to make it clear that table 5 is in section 4 and will be discussed later.

8) Figure 3: You again refer to but do not explain the figure.

9) Table 3: This table is hard to decipher because you have two columns (Jackson, Stringybark) with text between them. If possible, add borders around the table cells so that it is clear when text refers to both. Later, you have Yes and No in each column but then you have Completely derived, Park and some Chinese in the first column but not the second. I really couldn't make any sense of whether this was done on purpose or if this table is just sloppy. I suggest you try to redo this table to be clearer.

Response 8:

1) The corresponding explanation has been provided in line 206, page 5.

2) The original figure has been replaced, and the process shown in the figure is now explained in the text (as shown in Figure 2, on page 6, line 224).

3) Relevant explanations of Table 1 have been added in the text (as shown in line 223 on page 6).
4) 'Examples' has been replaced with a reference to the place name category (lines 244 and 266).
5) Descriptions of Template have been added for formulas 1 and 2 (lines 246-247 and 267-268).
6) 'Below' has been deleted throughout the article.
7) The sentence referring to Table 5 has been deleted.
8) An explanation of Figure 3 has been added (page 9, line 326).
9) The table has been changed to a single column (as shown in Table 2, page 8, line 282).

Reviewer 2 Report

Comments and Suggestions for Authors
  • While the study introduces Prompt Learning, it lacks specific examples and fails to clearly describe the exact format of the prompts and their impact on the translation process.
  • The paper mentions a top-down approach for building the knowledge graph but does not provide sufficient details regarding entity extraction, relationship definition, or data population.
  • The study only evaluates ChatGLM without comparing it against GPT-4, mT5, or BLOOM, leaving its performance advantage unverified.
  • The method is specifically designed for English-Chinese place name translation, with no discussion on its applicability to other language pairs (e.g., French-Chinese, Japanese-Chinese) or cultural variations in translation.

Questions:

  • Did the authors experiment with different prompt structures, such as few-shot prompting or chain-of-thought prompting, to enhance translation performance?
  • What specific data sources were used to build the knowledge graph for derived place names? Was it based on an existing geographic database, or was it generated through an automated extraction process?
  • What is the rationale behind selecting ChatGLM instead of other widely used LLMs such as GPT-4, mT5, or BLOOM? Was there any quantitative comparison conducted?
  • Can this method be extended to other languages? Would this approach be applicable to place name translations for French-Chinese, Japanese-Chinese, or Russian-Chinese? If not, what are the primary challenges in adapting it to other language pairs?

Author Response

Comments 1: While the study introduces Prompt Learning, it lacks specific examples and fails to clearly describe the exact format of the prompts and their impact on the translation process.

Response 1: Thank you for your correction. 'Figure 2. Translation of place names based on prompt learning' has been added to Section '3. Materials and Methods'; it shows the format of the prompt words and specific examples, with explanatory notes in the caption of Figure 2 (line 224, page 6).
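
For readers unfamiliar with prompt learning, the following minimal sketch illustrates how a translation prompt of this kind could be assembled in code. The function name and template wording are hypothetical illustrations, not the authors' exact format, which is given in Figure 2 of the revised manuscript.

    # Illustrative only: a hypothetical prompt template for English-Chinese place
    # name translation; the authors' actual template is shown in Figure 2.
    def build_translation_prompt(place_name: str, category: str) -> str:
        """Assemble a prompt that gives the model the place name and its category."""
        return (
            f"Translate the English place name '{place_name}' into Chinese. "
            f"The place name belongs to the category '{category}'. "
            "Transliterate the specific part and translate the generic part."
        )

    # Hypothetical usage: the category could come from the place name knowledge graph.
    print(build_translation_prompt("Jackson Creek", "River"))

In prompt learning, such a template turns the translation task into a natural-language instruction that a pre-trained model can complete without task-specific architecture changes.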

Comments 2: The paper mentions a top-down approach for building the knowledge graph but does not provide sufficient details regarding entity extraction, relationship definition, or data population.

Response 2: Thank you very much for your correction. This article has been modified accordingly: the section '3.2. Construction of the ontology of place name translation' has been added, which focuses on the construction of the knowledge graph ontology (on page 9).
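
As background for the top-down construction described above (schema first, then instances), the following is a minimal illustrative sketch of a place name knowledge graph fragment. All concept, relation, and instance names here are assumptions for illustration and do not reproduce the ontology of Section 3.2 of the revised manuscript.

    # Illustrative only: a toy, top-down knowledge graph fragment for derived
    # place names. Concept, relation, and instance names are hypothetical.

    # Schema level: concepts and the relations allowed between them.
    schema = {
        "concepts": ["PlaceName", "DerivedPlaceName", "Category", "Translation"],
        "relations": [
            ("DerivedPlaceName", "derivedFrom", "PlaceName"),
            ("PlaceName", "hasCategory", "Category"),
            ("PlaceName", "hasTranslation", "Translation"),
        ],
    }

    # Instance level: the graph is then populated with (subject, predicate, object) triples.
    triples = [
        ("Jackson Creek", "hasCategory", "River"),
        ("Jackson Creek", "derivedFrom", "Jackson"),
        ("Jackson", "hasTranslation", "杰克逊"),
    ]

    for subject, predicate, obj in triples:
        print(f"{subject} --{predicate}--> {obj}")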

Comments 3: Compared with existing methods, the description of the innovation for the method proposed in the paper is not very clear. It is suggested to strengthen the description for the innovation and the theoretical significance of the paper.

Response 3: Thank you very much. In response, this article has revised its contribution (lines 56-62 on page 2). At the same time, the significance of this study for the translation of place names between other languages and Chinese has been added (lines 423-425, page 14). In addition, the significance of this method has been added to the Abstract (page 1, lines 26-27).

Comments 4: “fully derived parts” in line 229 appears twice;

Response 4: Thank you very much for your feedback. Regarding this, the passage has been revised as 'i) Concept definition', and the duplicated text has been deleted (on page 9, lines 318-326).

Comments 5: What is the rationale behind selecting ChatGLM instead of other widely used LLMs such as GPT-4, mT5, or BLOOM? Was there any quantitative comparison conducted?

Response 5: Thank you very much for your evaluation. The basis for choosing ChatGLM has been added on page 5, lines 205-210. In this study, the place name translation task is trained by fine-tuning the model. Due to limited experimental resources, evaluating the performance of models such as GPT-4, mT5, and BLOOM on place name translation tasks is left as a future research direction, as noted in lines 436-438 on page 14.
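
For context on what fine-tuning an open LLM such as ChatGLM typically involves, the following is a minimal sketch using the Hugging Face transformers and peft libraries. The checkpoint name, target modules, and hyperparameters are assumptions for illustration; the authors' actual fine-tuning configuration is not detailed in this record.

    # Illustrative only: a parameter-efficient (LoRA-style) fine-tuning setup.
    # QLoRA would additionally load the base model in 4-bit precision before
    # attaching the adapters.
    import torch
    from transformers import AutoModel, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "THUDM/chatglm3-6b"  # hypothetical choice of ChatGLM checkpoint
    tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
    model = AutoModel.from_pretrained(base, trust_remote_code=True,
                                      torch_dtype=torch.float16)

    # Attach low-rank adapters so that only a small set of adapter weights is
    # updated when training on English-Chinese place name pairs.
    lora_cfg = LoraConfig(
        r=8,
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["query_key_value"],  # attention projection in ChatGLM blocks
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_cfg)
    model.print_trainable_parameters()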

 

Comments 6: It is more appropriate to change "Table 5" in line 241 to "Table 6".

Response 6: Thank you very much for your feedback. We have removed the sentence referring to Table 5.

Reviewer 3 Report

Comments and Suggestions for Authors

Please see the attachment.

Comments for author File: Comments.docx

Author Response

Comments 1: When describing the proposed method, the corresponding flowcharts can be added.

Response 1: Thank you very much for your evaluation. In response, this article has added a flowchart of the traditional pipeline-based place name translation method (Figure 1, page 4), as well as a flowchart of the proposed prompt-learning-based English-Chinese place name joint translation method (Figure 2, page 6).
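
For readers unfamiliar with the pipeline-style approach that Figure 1 contrasts with the joint method, the following toy sketch shows the usual split, transliterate, and translate steps. The lexicon and helper functions are invented for illustration and are not the authors' implementation.

    # Illustrative only: a toy pipeline that splits a place name into a specific
    # part (transliterated) and a generic part (translated by dictionary lookup).
    GENERIC_TERMS = {"creek": "溪", "mountain": "山", "park": "公园"}  # tiny sample lexicon

    def transliterate(word: str) -> str:
        """Stand-in for a transliteration component (e.g., Jackson -> 杰克逊)."""
        sample = {"jackson": "杰克逊"}
        return sample.get(word.lower(), word)

    def pipeline_translate(place_name: str) -> str:
        """Transliterate the specific words, then append the translated generic term."""
        *specific, generic = place_name.split()
        specific_zh = "".join(transliterate(w) for w in specific)
        generic_zh = GENERIC_TERMS.get(generic.lower(), generic)
        return specific_zh + generic_zh

    print(pipeline_translate("Jackson Creek"))  # -> 杰克逊溪

Because each stage depends on the previous one, errors made early in such a pipeline propagate to the final translation.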

Comments 2: Compared with existing methods, the description of the innovation for the method proposed in the paper is not very clear. It is suggested to strengthen the description for the innovation and the theoretical significance of the paper.

Response 2: Thank you very much. In response, this article has revised its contribution (lines 56-62 on page 2). At the same time, the significance of this study for the translation of place names between other languages and Chinese has been added (page 14, lines 428-431). In addition, the significance of this method has been added to the Abstract (page 1, lines 26-27).

Comments 3: “fully derived parts” in line 229 appears twice;

Response 3: Thank you very much for your feedback. Regarding this, the passage has been revised as 'i) Concept definition', and the duplicated text has been deleted (on page 9, lines 318-326).

Comments 4: It is more appropriate to change "Table 5" in line 241 to "Table 6".

Response 4: Thank you very much for your feedback. The sentence 'The example of the English Chinese translation effect of place names in the bilingual map field is shown in Table 5.' has been removed from this article.

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

Thank you for modifying the paper based on my earlier review. I find this paper to be satisfactorily improved and very readable. I have no further comments and look forward to seeing it published.
