Text-to-Ontology Mapping via Natural Language Processing with Application to Search for Relevant Ontologies in Catalysis
Round 1
Reviewer 1 Report
Thank you for the invitation to review the article “Text-to-Ontology Mapping via Natural Language Processing with Application to Search for Relevant Ontologies in Catalysis”.
The paper presents an ML-based approach to text-to-ontology mapping for finding ontologies relevant to scientific texts. The approach is based on the interesting idea of combining ontology concepts with their vector representations built from word embeddings, extended with the modern BERT transformer architecture. It is similar to modern approaches to text similarity measurement, but applied to the task of searching for an appropriate ontology model. Five classifiers were used in the experiment.
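A rough sketch of this kind of pipeline follows. This is an illustrative stand-in only, not the paper's implementation: the word/BERT embeddings are replaced here with simple bag-of-words count vectors, and the ontology names and concept labels are invented.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Stand-in for word embeddings / BERT: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_ontology(text, ontologies):
    # Score each ontology by the maximum similarity between the text
    # and any of its concept labels, then pick the best-scoring one.
    vec = embed(text)
    scores = {name: max(cosine(vec, embed(c)) for c in concepts)
              for name, concepts in ontologies.items()}
    return max(scores, key=scores.get)

ontologies = {  # hypothetical toy ontologies
    "catalysis": ["catalyst activity", "reaction rate", "active site"],
    "materials": ["crystal structure", "lattice parameter", "alloy phase"],
}
print(best_ontology("the catalyst increased the reaction rate", ontologies))
# -> catalysis
```

The paper additionally trains classifiers on top of such similarity features; the argmax above is only the simplest possible decision rule.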
I think that Listing 1, with PowerShell code, and Listing 4 as well are not relevant and could be removed, as they do not illustrate the main concepts of the paper. They could be replaced with an algorithm/pipeline illustration, in my opinion.
Also, I am not sure whether the five ontologies used in the experiment are enough for strong results. In addition, the boundary cases are not covered (or are covered but not indicated in Part 4), where some ontologies share common concepts, or have similar concepts with different meanings.
Author Response
Dear Reviewer, thank you for your review.
Here are our answers to your notes:
- I think that Listing 1, with PowerShell code, and Listing 4 as well are not relevant and could be removed, as they do not illustrate the main concepts of the paper. They could be replaced with an algorithm/pipeline illustration, in my opinion.
- We have removed Listing 1 and modified the content of the other listings to keep only the relevant lines
- We have added a summary to the methodology section, with a process diagram
- Also, I am not sure whether the five ontologies used in the experiment are enough for strong results. In addition, the boundary cases are not covered (or are covered but not indicated in Part 4), where some ontologies share common concepts, or have similar concepts with different meanings.
- Thank you for this remark. These ontologies were picked because they are expected to contain classes describing the domain of knowledge presented in the papers. The boundary cases in which some ontologies contain common concepts are covered, because the whole ontology is classified.
Best regards
Authors of the article
Reviewer 2 Report
This work explores methods of matching texts to ontologies. There are some issues:
- Citation formats can be improved: the brackets ([]) should be placed at the end of the sentence but before the period (./,). [Line 22, Line 39]
- Listing 1 is irrelevant to the algorithm and may be removed.
- Listing 2 is not clear. It would be better to use a piece of pseudo-code. The other listings are similar.
- Table 2 is not clear. Which is the text and which is the ontology?
- In Figs. 5 and 6, why does the row sum not equal one? Why is there an extra column on the left in Fig. 5?
- Figure 2 does not provide useful information.
- Why are the hypotheses listed in the experiments rather than in the method section?
- Can you provide some predictions, for example, to allow readers to understand what kind of texts can be matched to a certain ontology?
Author Response
- Citation formats can be improved: the brackets ([]) should be placed at the end of the sentence but before the period (./,). [Line 22, Line 39]
- We have repaired the inconsistency
- Listing 1 is irrelevant to the algorithm and may be removed.
- We have removed it
- Listing 2 is not clear. It would be better to use a piece of pseudo-code. The other listings are similar.
- We have removed the non-essential lines and modified the listings to be self-descriptive
- Table 2 is not clear. Which is the text and which is the ontology?
- We have added a heading to the table
- In Figs. 5 and 6, why does the row sum not equal one? Why is there an extra column on the left in Fig. 5?
- We have repaired this issue
- Figure 2 does not provide useful information.
- We have changed it
- Why hypotheses are listed in the experiments rather than in the method section?
- Thank you for the remark; we tried to clarify the hypotheses by mentioning them earlier. However, we think that placing the hypotheses in the experiments section is best for understanding them properly.
- Can you provide some predictions, for example, to allow readers to understand what kind of texts can be matched to a certain ontology?
- Some examples are provided in Table 2, which presents example sentences together with the most similar ontology node, found by the most similar description.
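The sentence-to-node matching described in this response can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: bag-of-words counts stand in for the paper's embeddings, and the node names and descriptions are invented.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Stand-in for the sentence embeddings used in the paper.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar_node(sentence, nodes):
    # nodes: {node_name: description}; return the node whose
    # description is most similar to the sentence.
    s = embed(sentence)
    return max(nodes, key=lambda n: cosine(s, embed(nodes[n])))

nodes = {  # hypothetical ontology nodes
    "CatalyticReaction": "a chemical reaction accelerated by a catalyst",
    "Adsorption": "adhesion of molecules to a surface",
}
print(most_similar_node("the reaction was accelerated by a platinum catalyst", nodes))
# -> CatalyticReaction
```

Each row of a table like Table 2 would then pair an input sentence with the node returned by such a lookup.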