An Enhanced Information Retrieval Method Based on Ontology for Bridge Inspection
Round 1
Reviewer 1 Report
The paper presents an enhanced information retrieval method based on ontology to predict target elements in bridge inspection. The work is set in the context of Bridge Management Systems (BMS) and aims to improve this kind of system. Ultimately, the work aims at supporting the bridge inspector in choosing the next bridge element for inspection (instead of typing the name of the element, the system could suggest the next one based on spatial relations). The paper is well written and well structured. It addresses a practical/real case scenario, which, in my opinion, highlights the relevance of the addressed research problem. The authors have also conducted two validation initiatives. Regarding novelty, the authors point out: (i) an improvement of the Seven-Step Method, based on automatic mapping technology; (ii) a new bridge inspection ontology (BIontology); (iii) a software architecture; (iv) a prototype that supports the bridge inspection (including a recommendation algorithm).
I would like to point out/discuss some aspects:
1. In my opinion, designing ontologies manually is not, in itself, a problem. I refer specifically to the identification of concepts, relations, and restrictions (here, I am not referring to the instantiation process, which is more amenable to automation). Building an ontology (defining concepts, relations, and restrictions), in my opinion, requires analysis and discussion, especially when considering the ontological foundation of each element to be included in the ontology (taking, e.g., top-level ontologies as a basis). This can avoid misunderstandings and false agreements. How do the authors see this point of view?
2. In Section 3, the authors present the “framework of the ontology-based IR method”. How generic is this framework, i.e., can it be applied in other works? The way this framework was introduced in the paper, it sounds like the method/process used for building/designing the proposed solution, i.e., it is specifically related to the steps conducted for building the proposed solution. In my opinion, this is not a problem in itself, but I think it could be made clearer to the reader that this is the process used for building the proposed solution and not a general framework meant to be applied in other, similar works.
3. On what basis can the authors say that “[…] the Seven-Step method is the most popular one […]” (when referring to methods used to build ontologies)? I think they should provide a reference/citation to support this statement.
4. Regarding the improved Seven-Step Method:
a) How useful is Step 3 for identifying concepts, relations, and restrictions? It is an artificial extraction step over books, standards, manuals, etc. Does it not produce an overload of elements (some of them of no use)? How much time does the ontology designer spend analyzing this volume of concepts and relations before including some of them in the ontology?
b) In Step 4, how is the class hierarchy built automatically from the relational database, given that this kind of relation (specialization/generalization) is not explicitly represented in relational databases?
c) How are the “composedOf” relations identified/defined by the automatic mapping, since this kind of relation is not explicitly defined in a relational database? In the example of Fig. 3, the label of a column is “componentID”, which could suggest some composition. But what if the label were any other, one not suggesting a composition? Could the automatic mapping still identify this kind of relation? (A minimal sketch illustrating this concern is given after this list.)
d) I did not find in the paper how many concepts, relations, and restrictions (not counting the instances) the BIontology has.
e) What does the “isElementOf” relation mean? What are its semantics?
f) In Fig. 5, the individuals/instances are labeled with terms that suggest types, e.g., abutment, substructure, railing; others are labeled 2&slab, 1&wetJoint. The latter labels also appear on the individuals in Fig. 9 and Fig. 10. What I mean is: do the former really refer to individuals or to types?
5. Regarding the algorithm: besides the spatial relations, does it consider any other type of optimization (e.g., shortest path) for suggesting elements for inspection?
6. Regarding the first case study (about construction time), I am not really convinced that automatic approaches for identifying concepts, relations, and axioms/restrictions (especially for analyzing the ontological properties inherent to events, types, objects, modes, etc.) are useful/precise for building ontologies. I think the time spent defining a concept does not say much about building well-founded ontologies.
7. What kinds of inferences are performed over the relations to identify the elements (especially considering the spatial relations, e.g., whole-part relations)?
8. How useful was the ontology for improving the method? What if the algorithm were implemented over a relational database (with spatial relations) or even over another structure (e.g., a graph)? This needs to be clearly described in the paper. (A minimal sketch of such a graph-based alternative is given below.)
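To make the concern in point 4(c) concrete, here is a minimal sketch in Python, under purely hypothetical assumptions about the schema (table names, foreign keys) and naming rules, and not the authors' actual mapping, of how an automatic mapping could guess “composedOf” relations from column-name conventions alone, and how it misses them when a foreign-key column name does not hint at composition:

# Minimal sketch, hypothetical schema: guessing "composedOf" object properties
# from foreign-key column names alone.

# table -> list of (foreign-key column, referenced table)
FOREIGN_KEYS = {
    "Component": [("bridgeID", "Bridge")],
    "Element":   [("componentID", "Component")],
    "Defect":    [("elem_ref", "Element")],   # no composition hint in the name
}

COMPOSITION_HINTS = ("component", "part", "element")

def guess_object_properties(foreign_keys):
    """Map each foreign key to an ontology object property: columns whose
    names contain a composition hint become 'composedOf'; everything else
    falls back to a generic 'relatedTo'."""
    triples = []
    for table, keys in foreign_keys.items():
        for column, referenced in keys:
            if any(hint in column.lower() for hint in COMPOSITION_HINTS):
                triples.append((referenced, "composedOf", table))
            else:
                triples.append((table, "relatedTo", referenced))
    return triples

for triple in guess_object_properties(FOREIGN_KEYS):
    print(triple)
# Only Element.componentID is recognized as a whole-part relation; bridgeID
# and elem_ref are not, which is exactly the limitation raised above.

If the mapping relies on such naming conventions, it works only when the database designers happened to use suggestive column names; it would help if the paper stated explicitly which rules are applied.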
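Similarly, for points 5 and 8, a minimal sketch (again in Python, with hypothetical element names and spatial relations, not taken from the paper) of a graph-based alternative that already adds a shortest-path flavour to the recommendation could look like this:

# Minimal sketch, hypothetical data: recommending the next bridge element
# from a plain graph of spatial relations, without any ontology layer.
from collections import deque

# Hypothetical spatial-adjacency graph: element -> spatially related elements.
SPATIAL_GRAPH = {
    "abutment_1":  ["wing_wall_1", "bearing_1"],
    "bearing_1":   ["abutment_1", "girder_1"],
    "girder_1":    ["bearing_1", "deck_slab_1", "wet_joint_1"],
    "deck_slab_1": ["girder_1", "railing_1"],
    "wet_joint_1": ["girder_1"],
    "railing_1":   ["deck_slab_1"],
    "wing_wall_1": ["abutment_1"],
}

def recommend_next(last_inspected, inspected, graph, top_k=3):
    """Rank uninspected elements by hop distance (shortest path) from the
    element inspected last, using breadth-first search."""
    distances = {last_inspected: 0}
    queue = deque([last_inspected])
    while queue:
        current = queue.popleft()
        for neighbor in graph.get(current, []):
            if neighbor not in distances:
                distances[neighbor] = distances[current] + 1
                queue.append(neighbor)
    candidates = [(d, e) for e, d in distances.items() if e not in inspected]
    return [e for _, e in sorted(candidates)[:top_k]]

# Example: after inspecting abutment_1 and bearing_1, the spatially nearest
# uninspected elements are suggested first.
print(recommend_next("bearing_1", {"abutment_1", "bearing_1"}, SPATIAL_GRAPH))

Showing how the ontology-based recommendation compares with something this simple (or arguing why the ontology layer is needed beyond it) would strengthen the paper.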
Author Response
Please see the attachment.
Author Response File: Author Response.docx
Reviewer 2 Report
The paper presents an IR method based on ontology to predict the target elements in a management system for bridge maintenance. The outcomes include a new bridge inspection ontology (BIontology), a new software architecture designed to integrate the ontology, and a prediction algorithm that automatically recommends the target elements to inspectors during bridge inspections.
The paper's core is a case study in which collecting bridge defect data plays an important role in bridge evaluation and maintenance decision-making.
The most interesting part of this work is the ontology definition, because the literature shows weaknesses here: in the field of bridge engineering, ontology applications are still at the conceptual stage. The large number of bridge elements and the limited screen size of mobile devices affect the search process, which can be quite time-consuming. So, the authors believe this approach may enhance retrieval accuracy by trying to encode the inspector’s search intent and the contextual meaning of the keywords.
As the paper is submitted to a journal, I expect a rigorous approach to creating such an ontology. Many steps in this respect are rather weak. For example, the ontology definition approach is only sketched in Table 2, which lists typical examples of basic questions and competency questions.
The findings are not surprising, as the average time using the automatic mapping is, of course, lower than the minimum average time (37.5674 s) for the manual method.
The paper is interesting, well written, and well structured. The main weaknesses are at the scientific level required for a journal paper.
In my opinion, this is not an exciting paper. I recommend a weak accept.
Minor:
- Table 2 appears before Table 1
Author Response
Please see the attachment.
Author Response File: Author Response.docx
Reviewer 3 Report
The paper is overall well-organized and well-written.
Author Response
Point 1: The paper is overall well-organized and well-written.
Response 1: Thank you very much for taking the time to review our manuscript. We really appreciate the reviewer’s positive comment on this manuscript.
Reviewer 4 Report
I don't have any comments.
Author Response
Point 1: I don't have any comments.
Response 1: Thank you very much for taking the time to review our manuscript. We appreciate your reply.
Reviewer 5 Report
The authors have done a good job. However, there are some concerns described below.
Knowledge forms a vast ecosystem that integrates human learning, machine learning, logical inference (deduction, induction, and abduction), experimental data, analytical results, simulations, algorithms, creative thinking, and cognitive reflection. In most cases, data and knowledge are used interchangeably, which should not be the case. The myriad proximal and distal interactions of knowledge with the abovementioned entities result in heavy computation. This computational complexity cannot be tackled unless the concept of knowledge is succinctly defined. For this reason, ontological approaches are used. However, the ontological idea that the authors have used is old and esoteric; most of the references are ten years old. The more advanced concept of ontology is as follows. An ontology is a concept map. A concept map boils down to a set of propositions. Any two propositions must have at least one common concept. A concept map can contain four types of knowledge: definitional knowledge, deductive knowledge, inductive knowledge, and creative knowledge. If the ontology (concept map) is not constructed based on a knowledge-aware framework, it will not be easy to retrieve and reuse the knowledge.
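To make this idea concrete, here is a minimal sketch in Python, with hypothetical bridge-inspection concepts, that represents a concept map as a set of propositions tagged with the four knowledge types and checks, taking the condition literally, that any two propositions share at least one concept:

# Minimal sketch, hypothetical content: a concept map as a set of propositions,
# each tagged with one of the four knowledge types, plus a literal check that
# any two propositions share at least one concept.
from itertools import combinations

# A proposition is (concept, linking phrase, concept, knowledge type).
PROPOSITIONS = [
    ("bridge deck", "is part of",      "superstructure",     "definitional"),
    ("bridge deck", "may exhibit",     "crack",              "inductive"),
    ("bridge deck", "requires",        "routine inspection", "definitional"),
    ("crack",       "raises priority", "bridge deck",        "deductive"),
]

def concepts(prop):
    return {prop[0], prop[2]}

def pairwise_linked(propositions):
    """True if every pair of propositions shares at least one concept."""
    return all(concepts(p) & concepts(q)
               for p, q in combinations(propositions, 2))

print(pairwise_linked(PROPOSITIONS))  # True for this hub-shaped map

A structure like this would also make it easier to state which parts of the BIontology carry definitional, deductive, inductive, or creative knowledge.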
If the authors search journals such as Advanced Engineering Informatics, Journal of Industrial Information Integration, and Engineering Reports, they will find knowledge-type-aware ontology building methods. The findings can be summarized as a glance ahead.
Author Response
Please see the attachment.
Author Response File: Author Response.docx