Article

A Framework for Evaluating Agricultural Ontologies

1 Department of Industrial Engineering and Management, Ariel University, Ariel 4077634, Israel
2 Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Be’er-Sheva 8410501, Israel
* Author to whom correspondence should be addressed.
Sustainability 2021, 13(11), 6387; https://doi.org/10.3390/su13116387
Submission received: 7 May 2021 / Revised: 26 May 2021 / Accepted: 1 June 2021 / Published: 4 June 2021

Abstract

An ontology is a formal representation of domain knowledge, which can be interpreted by machines. In recent years, ontologies have become a major tool for domain knowledge representation and a core component of many knowledge management systems, decision-support systems and other intelligent systems, inter alia, in the context of agriculture. A review of the existing literature on agricultural ontologies, however, reveals that most of the studies that propose agricultural ontologies lack an explicit evaluation procedure. This is undesirable because without well-structured evaluation processes, it is difficult to assess the value of ontologies to research and practice. Moreover, it is difficult to rely on such ontologies and share them on the Semantic Web or between semantic-aware applications. With the growing number of ontology-based agricultural systems and the increasing popularity of the Semantic Web, it becomes essential that such evaluation methods are applied during the ontology development process. Our work contributes to the literature on agricultural ontologies by presenting a framework that guides the selection of suitable evaluation methods, a step that is missing from most existing studies on agricultural ontologies. The framework supports the matching of appropriate evaluation methods to a given ontology based on the ontology’s purpose.

1. Introduction

An ontology is a formal representation of domain knowledge, which can be interpreted by machines [1]. In other words, ontologies formally define the entities (concepts) of a domain, their attributes and the relationships among them, in a machine interpretable way.
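To make this concrete, the following minimal sketch (illustrative only; the namespace, class, and property names are hypothetical and are not taken from any specific ontology) shows how a few agricultural concepts, an attribute, and a relationship can be expressed as machine-interpretable triples using the Python rdflib library:

```python
# A minimal, hypothetical domain representation: classes, an 'is-a' relation,
# a semantic relation between concepts, and an attribute, serialized as OWL/RDF.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/agri#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Concepts (classes) of the domain
g.add((EX.Crop, RDF.type, OWL.Class))
g.add((EX.Pest, RDF.type, OWL.Class))
g.add((EX.Pesticide, RDF.type, OWL.Class))
g.add((EX.Aphid, RDFS.subClassOf, EX.Pest))          # 'is-a' (taxonomy) relation

# A semantic relation between concepts
g.add((EX.infests, RDF.type, OWL.ObjectProperty))
g.add((EX.infests, RDFS.domain, EX.Pest))
g.add((EX.infests, RDFS.range, EX.Crop))

# An attribute of a concept
g.add((EX.localWaitingDays, RDF.type, OWL.DatatypeProperty))
g.add((EX.localWaitingDays, RDFS.domain, EX.Pesticide))

print(g.serialize(format="turtle"))
```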
Ontologies can be used for several purposes: First, an ontology can be used by machines for knowledge deduction [2]. This makes ontologies suitable for serving as the underlying knowledge base of decision support systems (DSS) and expert systems. Second, ontologies enable sharing conceptual schemata of data, allowing software applications to interoperate without having to share data structures [2,3,4]. Third, ontologies enable reuse of domain knowledge [5,6]. Once an ontology is published, it can be used by various applications in various domains. Moreover, ontologies that describe different parts of a domain can be integrated to create one large ontology of the domain [5]. This, of course, may require mappings between concepts of the different ontologies to bridge the different perspectives of the integrated knowledge models. A pivotal use of ontologies is for the creation of the so-called Semantic Web. The Semantic Web (also known as Web of Data or Web of Linked Data) is an extension of the Web through standards by the World Wide Web Consortium (W3C) with the aim of providing a formal representation of the information on the World Wide Web, to facilitate sharing and reusing of data on the web, making data in Web documents understandable for machines [2,7]. Thus, ontologies have become a major tool for domain knowledge representation and a core component of many knowledge management systems, decision support systems and other intelligent systems [1,5,8,9,10,11,12,13,14,15,16].
The increasing use of ontologies is also seen in agriculture (e.g., [4,6,17,18,19,20,21,22,23,24,25,26]), where they are used for various purposes, such as sharing agricultural knowledge among farmers around the world and in different languages [18,25,27,28,29], creating semantic interoperability of agricultural systems [4,22,30,31], and supporting farmer decisions [32] by providing automatic knowledge inference. This is not surprising, given that agriculture is a knowledge-centric field that covers many areas of expertise and many practices and technologies used worldwide. Agriculture also includes numerous concepts that are often designated by different names with similar meanings [20,33,34], fragmented across different systems [35]. Furthermore, the ability to integrate and harmonize large amounts of agricultural information, originating from a wide range of sources and in various formats, has recently been identified as a key prerequisite for sustainable agriculture [36]. All of these characteristics of agricultural knowledge emphasize the need to utilize ontologies in agriculture.
A review of the existing literature on agricultural ontologies, however, reveals that most of the studies that propose agricultural ontologies lack a clear ontology construction method and, more importantly, explicit evaluation procedures. This is undesirable because without well-structured construction and evaluation processes, it is difficult to assess the value of ontologies to research and practice. Moreover, it is difficult to rely on such ontologies and share them on the Semantic Web or between semantic-aware applications. Table 1 summarizes the goals of each of the surveyed agricultural ontologies and whether their construction and evaluation processes are available. The right column in this table shows that most agricultural ontology studies do not describe any evaluation method.
With the growing number of ontology-based agricultural systems and the increasing popularity of the Semantic Web [34], it becomes essential that such construction and evaluation methods are put forward to guide future efforts of ontology development. Such efforts are particularly important in the context of agriculture, which, as discussed above, covers a wide range of knowledge areas, concepts, sources and formats, and hence demands complex ontologies. While over the years numerous ontology construction methods have been proposed [5,13,39,40,41,42], these methods are widely different and are often not detailed enough [13]. The variety of construction methods and their inconsistencies may also lead to difficulties in the selection of proper ontology evaluation methods, further increasing the need for clear guiding rules for ontology evaluation.
In this study, we propose a framework that guides the selection of suitable evaluation methods for agricultural ontologies, based on the ontology goals, as identified in the literature summarized in Table 1. The use of the framework is demonstrated on the case of a pest-control ontology. Our work contributes to the existing literature on agricultural ontologies by presenting a comprehensive framework to facilitate ontology evaluation, which, as shown by our literature review, is currently missing from most studies. Using the proposed framework, ontology developers can easily match suitable evaluation methods to their agricultural ontologies.
The rest of the paper is organized as follows: Next, background on ontologies and the Semantic Web is provided. In the Materials and Methods section, we discuss the research steps and present a review of ontology evaluation methods. In the Results section, we survey the purposes for which ontologies are used in the context of agriculture, present the framework for matching evaluation methods based on the ontology purpose, and finally, demonstrate how the framework should be applied in a case study of pest-control ontology. In the last section, we discuss the resulting framework and provide conclusions.

2. Materials and Methods

The development of the evaluation framework for agricultural ontologies included three steps: First, we surveyed the literature on existing ontology evaluation approaches (not necessarily in the context of agriculture). Second, given the goals or uses of ontologies in the context of agriculture identified in the Introduction section, we defined a framework that matches evaluation approaches to different ontology purposes. Third, we demonstrated the application of the framework using a case study of a pest-control ontology, which has been developed to integrate knowledge from different websites, as well as concepts from other ontologies, into a pest-control knowledge base. Specifically, the pest-control ontology is aimed at supporting farmers’ pest-control decisions, as it serves as the knowledge base of a pest-control decision support application.
The ontology can be downloaded from pesticidesontology.com, and its goals and development process have been published [43].

A Review of Ontology Evaluation Methods

Various ontology evaluation methods have been proposed in the literature. These methods are commonly classified into the following types [44,45]:
  • Evaluation against a gold standard—This method compares an ontology with common standards or with another ontology that is considered as a benchmark. Such a method is typically used in cases where the ontology was automatically or semi-automatically generated. In many cases, the application of this method is impossible since such a gold standard does not exist.
  • Application-based evaluation—The application-based evaluation (or task-based evaluation [45]) uses the ontology for completing tasks within an application and measures its effectiveness. Since comparing several alternative ontologies in the context of a given application environment is usually not feasible, the proposed ontology is often evaluated in a quantitative or qualitative manner by measuring its suitability for performing tasks within the application. The advantage of this method is that it allows assessing how well the ontology fulfils its objectives. Nevertheless, the evaluation is only relevant for that particular application; if the ontology is to be used in another application, the evaluation is irrelevant.
  • Criteria-based evaluation—This method evaluates the ontology against a set of predefined criteria. Depending on the criteria, the evaluation is conducted either automatically [45] or manually, usually by experts [9]. Various criteria have been used in the literature. Gruber [40] defines five criteria: Clarity—the ontology should effectively and objectively communicate the definitions of terms; Coherence—the ontology should support inferences that are consistent with the definitions and have no contradictions; Extendibility—it should be possible to extend the ontology to support possible uses of the shared vocabulary, without altering the ontology; Minimal encoding bias—the conceptualization should be as independent of the particular encoding being used as possible; and Minimal ontological commitment—the ontology should define as few restrictions on the domain of discourse as possible. Gómez-Pérez [46] also defines five criteria that are partially overlapping with Gruber’s criteria: Consistency and Expandability, which are similar to Gruber’s coherence and extendibility, respectively; Conciseness—definitions should be clear and unambiguous, yet expressed in few words; Completeness—the ontology captures all that is known about the real world in a finite structure; and Sensitiveness—how sensitive the ontology is to small changes in a given definition. Some criteria (e.g., expandability, clarity and completeness) are difficult to evaluate and require manual (and subjective) inspection by domain experts or ontology engineers [45]. Other criteria can be measured quantitatively by various measures. For example, consistency can be measured based on the number of circularity errors, partition errors and semantic inconsistency errors; and conciseness can be measured based on the number of redundancy errors, grammatical redundancy errors and number of identical formal definitions of classes [47]. The selection of appropriate criteria and measures for the evaluation of an ontology depends on the ontology requirements and goals, as demonstrated by [45]. (A minimal sketch of such quantitative checks appears after this list.)
  • Data-driven evaluation—In this evaluation method, the ontology is compared with relevant sources of data (e.g., documents, dictionaries, etc.) about the domain of discourse. For example, Brewster et al. [48] use such a method to determine the degree of structural fit between an ontology and a corpus of documents.
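As an illustration of the quantitative, criteria-based measures mentioned above, the following sketch (assuming a hypothetical OWL file name) counts two simple error types with rdflib: circularity errors in the subclass hierarchy and redundancy in the form of distinct classes that share an identical label:

```python
# Illustrative criteria-based checks: circularity errors (cycles in the
# subClassOf hierarchy) and simple redundancy (classes sharing a label).
from collections import defaultdict
from rdflib import Graph, RDFS

g = Graph()
g.parse("pest_control.owl", format="xml")  # hypothetical file, RDF/XML assumed

# Map each class to its direct subclasses
children = defaultdict(set)
for sub, _, sup in g.triples((None, RDFS.subClassOf, None)):
    children[sup].add(sub)

def descendants(node):
    """All classes reachable from `node` by following subClassOf downwards."""
    seen, stack = set(), [node]
    while stack:
        current = stack.pop()
        for child in children.get(current, set()) - seen:
            seen.add(child)
            stack.append(child)
    return seen

# Circularity errors: a class that is (transitively) a subclass of itself
circular = [cls for cls in list(children) if cls in descendants(cls)]

# Redundancy: distinct classes sharing exactly the same rdfs:label
by_label = defaultdict(set)
for cls, _, label in g.triples((None, RDFS.label, None)):
    by_label[str(label)].add(cls)
duplicates = {lbl: classes for lbl, classes in by_label.items() if len(classes) > 1}

print("Circularity errors:", circular or "none")
print("Classes sharing a label:", duplicates or "none")
```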
To the above classification of methods, Brank et al. [44] add another dimension of classification—one that is based on the level of evaluation. They argue that an ontology can be evaluated on six different levels, representing different ontology aspects, as follows. First, the lexical, vocabulary, or data level, in which the concepts, facts or instances that are included in the ontology are evaluated, usually by comparing them with different domain-related data sources, as well as with string similarity techniques (e.g., [49]). Second, the hierarchy or taxonomy level, in which we evaluate whether an appropriate hierarchy of ‘is-a’ relations between concepts is defined (e.g., [45]). Third, other semantic relations, in which semantic relations, besides ‘is-a’, are evaluated. Fourth, the context level, in which an ontology may be part of a larger collection of ontologies and may reference or be referenced by definitions in these other ontologies. At this level, we evaluate these inter-ontology references. The context of an ontology can also be an application that uses the ontology. If this is the case, we evaluate how effective the ontology is with respect to the achievement of application goals. Fifth, the syntactic level, in which we evaluate whether the ontology is correctly specified and follows the syntactic requirements of the selected formal language. This level is relevant when the ontology is created manually. Many of the existing ontology editors (e.g., Protégé) include automatic mechanisms for syntactic evaluation. Finally, the structure, architecture, and design level, in which we evaluate whether the ontology satisfies pre-defined design principles or criteria, structural concerns, and whether it is suitable for further development.
Brank et al. [44] map between the four evaluation methods (i.e., gold standard, application-based, criteria-based and data-driven) and the above levels. For example, while all methods are suitable for evaluating the lexical, vocabulary, or data level, the hierarchy or taxonomy level, and the other semantic relations level, only the application-based and criteria-based methods are suitable for evaluating the context level. In addition, only the gold-standard and criteria-based methods are suitable for evaluating the syntactic level, and only the criteria-based method is suitable for evaluating the structure, architecture, and design level.
The framework proposed in this study bridges between the above-mentioned evaluation methods and the different levels of evaluation, for the different purposes of agricultural ontologies.

3. Results

3.1. Proposed Framework for Agricultural Ontology Evaluation

To facilitate the selection of suitable evaluation methods, we propose a framework that links the ontology purpose, the ontology aspects (levels), and the appropriate evaluation methods [44,45]. The framework recommends appropriate evaluation methods based on the purpose of the ontology.
Our literature review reveals that in the context of agriculture, ontologies are used for four main purposes: As means to share vocabularies and integrate data (e.g., [6,18]), for knowledge search and exploration [17,18,19,21,25,33], for system interoperability (e.g., [20,22,30]), and for decision support and automation (e.g., [32,50]). The evaluation of ontologies with these different purposes requires focusing on different ontology aspects, i.e., it calls for the evaluation of different ontology levels: the lexical, vocabulary, or data level, the hierarchy or taxonomy level, the semantic relations level (other than hierarchical), and the context level [44]. Each of these levels requires the employment of different evaluation methods.
The selection of appropriate evaluation methods for a given ontology should account for the aspects or levels of the ontology that need to be evaluated, given the ontology’s purpose. Evaluation of the hierarchy or taxonomy level, assuring that hierarchical ‘is-a’ relations between concepts are correct, is particularly important in ontologies that are aimed at supporting knowledge search and exploration in order to foster efficient search of knowledge. Such evaluation can be accomplished using any of the evaluation methods (gold-standard, application-based, criteria-based, or data-driven) [44]. However, particularly suitable for such evaluation are the gold-standard method, in which the taxonomy is compared to a common standard or to another ontology that serves as a benchmark (assuming one exists), and the criteria-based method (e.g., [51]), in which the backbone taxonomy relationships are examined against various criteria.
The evaluation of the context level ensures that the ontology correctly relates to other ontologies or to the particular context in which it is used. Such an evaluation is particularly important in ontologies that aim at decision support, to measure the effectiveness of the ontology-based DSS, and in ontologies that aim at integration of data from different ontologies, to ensure that references between ontologies are correct. This can be attained using an application-based method, in which the effectiveness of the ontology is examined within the application where it is used, or using experts’ assessments of predefined criteria [44].
Evaluation of the semantic relations level is particularly important in ontologies that aim at creating shared vocabularies and integrating data (or ontologies) to create system interoperability, since these ontologies require agreement on the meaning of things. Such evaluation can be attained by each of the evaluation methods.
Syntactic level evaluation is important for any ontology, as correct syntactic specification is mandatory for the ontology to be used by information systems, computerized agents and the Semantic Web. Likewise, since the building blocks of any ontology are the concepts, instances, or facts that form it, evaluation of the lexical level, i.e., the vocabulary used to represent these concepts and facts, is important for any ontology. Evaluation at this level usually involves data-driven evaluation methods in which the ontology is compared with domain-specific documents, dictionaries, and other relevant sources.
The framework recommendations are summarized in Figure 1. Each quadrant of Figure 1 links a particular ontology purpose to the ontology levels that are important for evaluation, which are in turn linked to suitable evaluation methods. The links to and from each level are marked with a unique color and line type: links through the lexical, vocabulary, or data level appear in black, links through the other semantic relations level appear in red, links through the context level appear in blue, and links through the syntactic level appear in light blue. When several methods are suitable, a thicker link to a method indicates that the method is more suitable.
For example, the top-right quadrant focuses on the purpose of knowledge search and exploration. Ontologies focused on this purpose require the evaluation of (1) the hierarchy or taxonomy level, which can be addressed by all evaluation methods, and especially by the gold-standard and criteria-based methods (to which the lines are thicker); (2) the lexical, vocabulary, or data level, which can be addressed by all evaluation methods; and (3) the syntactic level, which can be addressed by the gold-standard and criteria-based methods.
To complement the framework, we propose an iterative evaluation process, wherein we identify the ontology purposes. For each purpose, different ontology levels are selected for evaluation, and for each level, suitable evaluation methods are applied. The process is depicted in Figure 2.
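For illustration, the iterative logic of this process can be summarized programmatically. In the sketch below, only the knowledge search and exploration entry of the framework is filled in, following the quadrant described above; the remaining purposes are left as placeholders and would be populated analogously from Figure 1:

```python
# Iterative evaluation process (cf. Figure 2): identify purposes, select the
# levels to evaluate for each purpose, and apply suitable methods per level.
FRAMEWORK = {
    "knowledge search and exploration": {
        "hierarchy/taxonomy":      ["gold-standard", "criteria-based",   # preferred
                                    "application-based", "data-driven"],
        "lexical/vocabulary/data": ["gold-standard", "application-based",
                                    "criteria-based", "data-driven"],
        "syntactic":               ["gold-standard", "criteria-based"],
    },
    # "share vocabularies and integrate data": {...},
    # "system interoperability": {...},
    # "decision support and automation": {...},
}

def plan_evaluation(purposes):
    """Yield (purpose, level, methods) triples to be worked through iteratively."""
    for purpose in purposes:
        for level, methods in FRAMEWORK.get(purpose, {}).items():
            yield purpose, level, methods

for purpose, level, methods in plan_evaluation(["knowledge search and exploration"]):
    print(f"{purpose}: evaluate the {level} level using {', '.join(methods)}")
```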

3.2. Framework Application: The Case of a Pest-Control Ontology

We demonstrate the application of the framework through a case study of a pest-control ontology evaluation. In what follows, the evaluation steps, including purpose identification, selection of suitable levels of evaluation, and selection of corresponding evaluation methods, are demonstrated.
  • Step 1: Identifying purposes
The proposed pest-control ontology is intended to serve two purposes: (1) integration of knowledge from different websites and concepts from other ontologies in order to provide a pest-control knowledge base; and (2) support of pest-control decisions.
  • Step 2: Selecting levels of evaluation
The integrative role of the ontology requires evaluating the lexical level (to ensure that the vocabulary used by the ontology is sufficient), the semantic relations level (to ensure that there are no semantic ambiguities among concepts from different sources), and the syntactic level, as shown in the upper left quadrant of Figure 1. The decision support role of the ontology requires, in addition to the lexical and syntactic levels, the evaluation of the context level.
While the hierarchy level may also be important for the evaluation of ontologies aimed at facilitating knowledge search, our ontology does not specify crop and pest hierarchies (e.g., categorization of crops to families) but relies on the hierarchies of AGROVOC [27]. AGROVOC is a multilingual thesaurus maintained by the Food and Agriculture Organization (FAO) of the United Nations, which is now published as linked data on the Semantic Web; it includes definitions and properties of pests, crops, and chemicals in 17 different languages. As a result, we do not evaluate the hierarchy level.
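For illustration, hierarchies and labels can be reused from AGROVOC at run time by querying its public SPARQL endpoint. In the following sketch, the endpoint URL and the example label are assumptions and should be adjusted to the actual service and concept of interest:

```python
# Illustrative reuse of AGROVOC hierarchies: retrieve the broader (parent)
# concepts of a pest via SPARQL instead of redefining the hierarchy locally.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://agrovoc.fao.org/sparql")  # assumed endpoint URL
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    SELECT ?concept ?broaderLabel WHERE {
        ?concept skos:prefLabel "aphids"@en ;   # example label (assumption)
                 skos:broader ?broader .
        ?broader skos:prefLabel ?broaderLabel .
        FILTER (lang(?broaderLabel) = "en")
    }
""")

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["concept"]["value"], "-> broader:", row["broaderLabel"]["value"])
```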
  • Step 3: Selecting evaluation methods
For each level of evaluation, corresponding evaluation methods are selected. The evaluation of the syntactic level is easily accomplished in our case: Protégé [5], the ontology editor used for developing the pest-control ontology, provides automatic syntax checking for OWL ontologies. Nevertheless, if an ontology is not developed using a tool that provides automatic syntax checking, such as Protégé, the syntactic level can be evaluated using a comparison to a gold-standard, if one exists, or using criteria-based evaluation, to ensure that the ontology matches the syntactic requirements of the language.
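As an illustration, a basic syntactic check outside an editor can be performed by simply attempting to parse the ontology file; the file name below is a hypothetical placeholder, and note that this validates the RDF serialization rather than full OWL profile conformance:

```python
# Minimal syntactic check: a malformed serialization raises a parse error.
from rdflib import Graph

try:
    g = Graph()
    g.parse("pest_control.owl", format="xml")  # RDF/XML serialization assumed
    print(f"Parsed successfully: {len(g)} triples")
except Exception as err:  # rdflib raises parser-specific exceptions
    print("Syntactic error:", err)
```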
To evaluate the semantic relations level, criteria-based evaluation is often used (e.g., [9,45]). Among the criteria specified above in the Materials and Methods section, we consider the following as relevant for evaluating the semantic relations level:
Clarity—this criterion is satisfied if the ontology definitions are objective and, when possible, formalized. Furthermore, all definitions should be documented in natural language. In the case of the pest-control ontology, the terminology was created on the basis of existing sources, specifically online documents by the Plant Protection and Inspection Services (PPIS), representing a common professional terminology [43]. Therefore, it can be argued that the definitions used are objective. Moreover, each concept was defined in natural language and was verified by domain experts.
Coherence—this criterion is satisfied if all axioms are logically consistent and if informal concept definitions (in natural language) do not contradict the axioms [40]. The pest-control ontology used the Protégé-OWL reasoner to ensure logical consistency of ontology axioms. Furthermore, domain experts confirmed that the definitions are consistent with the formal axioms [43].
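A comparable automated coherence check can be sketched with the owlready2 library, which invokes the bundled HermiT reasoner (a Java runtime is required); the file path is a hypothetical placeholder, and this is not the authors’ original evaluation script:

```python
# Illustrative coherence check: classify the ontology and list unsatisfiable classes.
import os
from owlready2 import get_ontology, sync_reasoner, default_world

onto = get_ontology("file://" + os.path.abspath("pest_control.owl")).load()

with onto:
    # Runs the bundled HermiT reasoner; raises OwlReadyInconsistentOntologyError
    # if the axioms are globally inconsistent.
    sync_reasoner()

# Classes equivalent to owl:Nothing indicate contradictory definitions
unsatisfiable = list(default_world.inconsistent_classes())
print("Unsatisfiable classes:", unsatisfiable or "none")
```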
Minimal encoding bias—this criterion implies that knowledge representation should not be affected by a symbol-level encoding chosen for convenience of implementation [40]. The pest-control ontology was specified in OWL, which is expressive enough that knowledge representation choices are not limited or affected by symbol-level encoding, so there is no encoding bias.
Conciseness—conciseness is satisfied if the ontology does not include any unnecessary definitions or explicit redundancies among definitions [10,47]. We reviewed the schema and verified that it includes no unnecessary definitions. Additionally, we confirmed that the schema does not include concepts that already appear in AGROVOC and DBpedia, but rather includes references to these concepts. For example, the ontology includes Hebrew definitions of pests and links them to the corresponding concepts in AGROVOC. In this way, the ontology is extended with additional relevant information on these pests (e.g., their family, species, and names in other languages).
Completeness—completeness is satisfied if the ontology captures all that is known about the real world. Previous studies [10,46] propose to check completeness by showing that all competency questions (i.e., predefined questions that should be answered by the ontology) can be answered using the ontology. For the pest-control ontology, a previous study [43] showed that all specified competency questions can be answered by the ontology. The above notwithstanding, the ontology may be considered incomplete with respect to possible future uses (beyond what is defined by the competency questions).
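For illustration, a competency question can be phrased as a SPARQL query and executed against the ontology to verify that it is answerable. The class and property names below are hypothetical and do not reflect the actual schema of the pest-control ontology:

```python
# Illustrative completeness check via a competency question expressed in SPARQL.
from rdflib import Graph

g = Graph()
g.parse("pest_control.owl", format="xml")  # hypothetical file

# Competency question (illustrative): "Which pesticides treat aphids on pepper,
# and how many waiting days do they require for the local market?"
answers = g.query("""
    PREFIX ex: <http://example.org/agri#>
    SELECT ?pesticide ?days WHERE {
        ?pesticide ex:treatsPest       ex:Aphid ;
                   ex:appliesToCrop    ex:Pepper ;
                   ex:localWaitingDays ?days .
    }
""")

if len(answers) == 0:
    print("Competency question not answerable; the ontology may be incomplete")
else:
    for pesticide, days in answers:
        print(pesticide, days)
```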
While a gold-standard approach may also be applicable for evaluating the semantic relations level, to the best of our knowledge, there are no relevant gold standards or other pest-control ontologies available.
To evaluate the context level, we examine the usability of the ontology and the effectiveness of the system that uses the ontology [44]. In our context, the goals of evaluation are: (1) validate the usability of the ontology for supporting pesticide-usage decisions; (2) evaluate the extent to which using the ontology-based application improves user performance. To address the first goal, we developed a prototypical Web-based application for supporting pesticide-usage decisions that is based on the proposed pest-control ontology. This application supports farmers’ decisions, such as selecting the pesticide that requires the least number of waiting days for different markets (local and global), or finding whether a crop treated with a given pesticide can be exported to a particular country. To address the second goal and evaluate the effectiveness of the ontology-based application, we performed an experiment, in which participants were given two simple tasks that required retrieving information on pesticide application regulation: (1) find the pesticide with the lowest number of waiting days for the local market and for export that is suitable for treating pepper infested with Aphid (plant lice); and (2) find whether pepper with a maximum residue limit (MRL) of 0.3 for the chemical Pymetrozine can be exported to the USA. The experiment measured the increased effectiveness of using the Web-based application in accomplishing these tasks. The results showed that when using the application, all participants were able to complete both tasks in less than 1.5 min, compared to more than 5 min without the application (in fact, half of the participants were unable to complete the tasks without the application). The web-based application and the experiment are described in more detail in a previous paper [43].
To evaluate the lexical level, comparisons with various sources of data are usually performed [44]. Such evaluation would be redundant in our case, as the ontology was built based on existing online data sources. It should be noted that the criteria-based evaluation used above for the semantic relations level is also relevant for the evaluation of the lexical level.
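Should such a data-driven check nevertheless be needed, a simple sketch is to measure how many terms extracted from domain documents appear as labels in the ontology; the term list and file name below are illustrative placeholders only:

```python
# Illustrative data-driven check of the lexical level: term coverage of the
# ontology labels with respect to terms found in domain documents.
from rdflib import Graph, RDFS

g = Graph()
g.parse("pest_control.owl", format="xml")  # hypothetical file

ontology_labels = {str(o).lower() for _, _, o in g.triples((None, RDFS.label, None))}

# Terms that would normally be extracted from domain documents (e.g., regulation
# pages); listed inline here for illustration only.
corpus_terms = {"pepper", "aphid", "pymetrozine", "maximum residue limit"}

covered = corpus_terms & ontology_labels
print(f"Lexical coverage: {len(covered)}/{len(corpus_terms)} terms found in the ontology")
print("Missing terms:", corpus_terms - covered)
```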

4. Conclusions

Ontologies are a powerful tool for representing domain knowledge. Consequently, a growing number of knowledge management systems and DSS in the context of agriculture are based on ontologies. In this paper, a framework for agricultural ontology evaluation is presented. The framework combines ideas from pivotal existing evaluation methods [44,45] and matches specific evaluation methods to the purpose of the agricultural ontology. The applicability and ease of use of the framework are then demonstrated on the case of a pest-control ontology.
Clear and structured methods for ontology evaluation are highly important; without such methods, it is difficult to consider an ontology as a contribution to research and practice. With the growing use of ontologies and the Semantic Web for developing agricultural systems, such methods become increasingly critical. Furthermore, as revealed by our literature review, most of the studies that develop agricultural ontologies do not present the method of construction and do not even discuss how the developed ontologies were evaluated, making it difficult to share and reuse them. Our work thus narrows this gap in the literature on agricultural ontologies by proposing a framework that facilitates the selection of evaluation methods suitable to the ontology purpose. This framework can guide the evaluation of ontologies developed in other studies.
This study is not without limitations, which open new directions for future research. Specifically, as the applicability of the framework is demonstrated on the case of a pest-control ontology, it focuses on the evaluation levels and respective evaluation methods that are most relevant to the characteristics and goals of this particular ontology. For example, the pest-control ontology does not require evaluation of the hierarchy level. In addition, the ontology was developed using Protégé, which obviates the need for evaluating the syntactic level (although we do refer to cases where the ontology is manually defined and where hierarchy level evaluation is required). Therefore, it would be beneficial in future research to apply the framework to evaluate additional agricultural ontologies. In turn, such applications of the framework can generate insights into how to further refine the framework and increase its usability and effectiveness, thereby advancing ontology research and practice.

Author Contributions

Conceptualization, A.G.; Methodology, A.G. and G.R.; Formal Analysis, A.G.; Investigation, A.G.; Writing—Original Draft Preparation, A.G.; Writing—Review & Editing, L.F. and G.R.; Supervision, G.R. and L.F.; Project Administration, G.R.; Funding Acquisition, G.R. and L.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Agriculture and Rural Development grant number 458-0603/l3.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available in a publicly accessible repository.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chandrasekaran, B.; Josephson, J.R.; Benjamins, V.R. What are ontologies, and why do we need them? IEEE Intell. Syst. 1999, 14, 20–26.
  2. W3C. Ontology Driven Architecture and Potential Uses of the Semantic Web in Systems and Software Engineering. 2006. Available online: http://www.w3.org/2001/sw/BestPractices/SE/ODA/ (accessed on 3 June 2021).
  3. Gruber, T.R. A translation approach to portable ontology specifications. Knowl. Acquis. 1993, 5, 199–220.
  4. Chukkapalli, S.S.L.; Mittal, S.; Gupta, M.; Abdelsalam, M.; Joshi, A.; Sandhu, R.; Joshi, K. Ontologies and Artificial Intelligence Systems for the Cooperative Smart Farming Ecosystem. IEEE Access 2020, 8, 164045–164064.
  5. Noy, N.F.; McGuinness, D.L. Ontology Development 101: A Guide to Creating Your First Ontology; Technical Report; Knowledge Systems Laboratory, 2001. Available online: https://protege.stanford.edu/publications/ontology_development/ontology101.pdf (accessed on 3 June 2021).
  6. Roussey, C.; Soulignac, V.; Champomier, J.C.; Abt, V.; Chanet, J.P. Ontologies in agriculture. In Proceedings of the International Conference on Agricultural Engineering (AgEng 2010), Clermont-Ferrand, France, 6–8 September 2010.
  7. Berners-Lee, T.; Hendler, J.; Lassila, O. The Semantic Web: A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities. In The Future of the Web, 1st ed.; The Rosen Publishing Group: New York, NY, USA, 2007; pp. 70–80.
  8. Bose, R.; Sugumaran, V. Semantic Web Technologies for Enhancing Intelligent DSS Environments. In Decision Support for Global Enterprises; Springer: Boston, MA, USA, 2007.
  9. Delir Haghighi, P.; Burstein, F.; Zaslavsky, A.; Arbon, P. Development and evaluation of ontology for intelligent decision support in medical emergency management for mass gatherings. Decis. Support Syst. 2013, 54, 1192–1204.
  10. Yu, J.; Thom, J.A.; Tam, A. Evaluating Ontology Criteria for Requirements in a Geographic Travel Domain. In On the Move to Meaningful Internet Systems 2005: CoopIS, DOA, and ODBASE; Hutchison, D., Kanade, T., Kittler, J., Kleinberg, J.M., Mattern, F., Mitchell, J.C., Naor, M., Nierstrasz, O., Pandu Rangan, C., Steffen, B., et al., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; pp. 1517–1534. ISBN 978-3-540-29738-3.
  11. Yang, J.; Lin, Y. Study on Evolution of Food Safety Status and Supervision Policy—A System Based on Quantity, Quality, and Development Safety. Sustainability 2019, 11, 6656.
  12. Meng, X.; Xu, C.; Liu, X.; Bai, J.; Zheng, W.; Chang, H.; Chen, Z. An Ontology-Underpinned Emergency Response System for Water Pollution Accidents. Sustainability 2018, 10, 546.
  13. Spoladore, D.; Pessot, E. Collaborative Ontology Engineering Methodologies for the Development of Decision Support Systems: Case Studies in the Healthcare Domain. Electronics 2021, 10, 1060.
  14. Spoladore, D.; Mahroo, A.; Trombetta, A.; Sacco, M. DOMUS: A domestic ontology managed ubiquitous system. J. Ambient. Intell. Hum. Comput. 2021, 42, 1190.
  15. Ali, F.; Ali, A.; Imran, M.; Naqvi, R.A.; Siddiqi, M.H.; Kwak, K.-S. Traffic accident detection and condition analysis based on social networking data. Accid. Anal. Prev. 2021, 151, 105973.
  16. Ali, F.; El-Sappagh, S.; Islam, S.R.; Ali, A.; Attique, M.; Imran, M.; Kwak, K.-S. An intelligent healthcare monitoring framework using wearable sensors and social networking data. Future Gener. Comput. Syst. 2021, 114, 23–43.
  17. Beck, H.; Kim, S.; Hagan, D. A crop-pest ontology for extension publications. In Proceedings of the EFITA/WCCA Joint Congress on IT in Agriculture, Vila Real, Portugal, 25–28 July 2005.
  18. Chang, C.; Guojian, X.; Guangda, L. Thesaurus and ontology technology for the improvement of agricultural information retrieval. In Agricultural Information and IT, Proceedings of the IAALD AFITA WCCA 2008, Tokyo University of Agriculture, Tokyo, Japan, 24–27 August 2008; Nagatsuka, T., Ninomiya, S., Eds.; International Association of Agricultural Information Specialists (IAALD): Tokyo, Japan; Asian Federation of Information Technology in Agriculture (AFITA): Tokyo, Japan, 2008; ISBN 978-4-931250-02-4.
  19. Li, D.; Kang, L.; Cheng, X.; Li, D.; Ji, L.; Wang, K.; Chen, Y. An ontology-based knowledge representation and implement method for crop cultivation standard. Math. Comput. Model. 2013, 58, 466–473.
  20. Liao, J.; Li, L.; Liu, X. An integrated, ontology-based agricultural information system. Inf. Dev. 2015, 31, 150–163.
  21. Song, G.; Wang, M.; Ying, X.; Yang, R.; Zhang, B. Study on Precision Agriculture Knowledge Presentation with Ontology. AASRI Procedia 2012, 3, 732–738.
  22. Tomic, D.; Drenjanac, D.; Hoermann, S.; Auer, W. Experiences with creating a Precision Dairy Farming Ontology (DFO) and a Knowledge Graph for the Data Integration Platform in agriOpenLink. Agrárinform. J. Agric. Inform. 2015, 6, 115–126.
  23. Zhang, N.; Wang, M.; Wang, N. Precision agriculture—A worldwide overview. Comput. Electron. Agric. 2002, 36, 113–132.
  24. Aminu, E.F.; Oyefolahan, I.O.; Abdullahi, M.B.; Salaudeen, M.T. An OWL Based Ontology Model for Soils and Fertilizations Knowledge on Maize Crop Farming: Scenario for Developing Intelligent Systems. In Proceedings of the 15th International Conference on Electronics, Computer and Computation (ICECCO), Abuja, Nigeria, 10–12 December 2019; pp. 1–8.
  25. Su, X.-L.; Li, J.; Cui, Y.-P.; Meng, X.-X.; Wang, Y.-Q. Review on the Work of Agriculture Ontology Research Group. J. Integr. Agric. 2012, 11, 720–730.
  26. Liu, X.; Bai, X.; Wang, L.; Ren, B.; Lu, S.; Lin, L. Review and Trend Analysis of Knowledge Graphs for Crop Pest and Diseases. IEEE Access 2019, 7, 62251–62264.
  27. AGROVOC. AGROVOC Thesaurus. Available online: http://aims.fao.org/vest-registry/vocabularies/agrovoc-multilingual-agricultural-thesaurus (accessed on 3 June 2021).
  28. Sánchez-Alonso, S.; Sicilia, M.-A. Using an AGROVOC-based ontology for the description of learning resources on organic agriculture. In Metadata and Semantics; Sicilia, M.-A., Lytras, M.D., Eds.; Springer: Boston, MA, USA, 2009; pp. 481–492. ISBN 978-0-387-77744-3.
  29. Alfred, R.; Chin, K.O.; Anthony, P.; San, P.W.; Im, T.L.; Leong, L.C.; Soon, G.K. Ontology-Based Query Expansion for Supporting Information Retrieval in Agriculture. In The 8th International Conference on Knowledge Management in Organizations; Uden, L., Wang, L.S.L., Corchado Rodríguez, J.M., Yang, H.-C., Ting, I.-H., Eds.; Springer: Dordrecht, The Netherlands, 2014; pp. 299–311. ISBN 978-94-007-7286-1.
  30. Aqeel-ur, R.; Zubair, S.A. ONTAgri: Scalable Service Oriented Agriculture Ontology for Precision Farming. In Proceedings of the International Conference on Agricultural and Biosystems Engineering (ICABE 2011), Hong Kong, China, 20 February 2011.
  31. Goumopoulos, C.; Kameas, A.D.; Cassells, A. An ontology-driven system architecture for precision agriculture applications. Int. J. Metadata Semant. Ontol. 2009, 4, 72–84.
  32. Gaire, R.; Lefort, L.; Compton, M.; Falzon, G.; Lamb, D.; Taylor, K. Demonstration: Semantic Web Enabled Smart Farm with GSN. In Proceedings of the International Semantic Web Conference (Posters & Demos), Sydney, Australia, 21–25 October 2013.
  33. Palavitsinis, N.; Manouselis, N. Agricultural Knowledge Organization Systems: An Analysis of an Indicative Sample. In Handbook of Metadata, Semantics and Ontologies; Sicilia, M.-A., Ed.; World Scientific: London, UK, 2014; pp. 279–296. ISBN 978-981-283-629-8.
  34. Arnaud, E.; Laporte, M.-A.; Kim, S.; Aubert, C.; Leonelli, S.; Miro, B.; Cooper, L.; Jaiswal, P.; Kruseman, G.; Shrestha, R.; et al. The Ontologies Community of Practice: A CGIAR Initiative for Big Data in Agrifood Systems. Patterns 2020, 1, 100105.
  35. Antle, J.M.; Basso, B.; Conant, R.T.; Godfray, H.C.J.; Jones, J.W.; Herrero, M.; Howitt, R.E.; Keating, B.A.; Munoz-Carpena, R.; Rosenzweig, C.; et al. Towards a new generation of agricultural system data, models and knowledge products: Design and improvement. Agric. Syst. 2017, 155, 255–268.
  36. Valin, M. Connecting Data Sources for Sustainability. 2021. Available online: https://www.precisionag.com/sponsor/skyward/connecting-data-sources-for-sustainability/ (accessed on 26 May 2021).
  37. Visoli, M.; Ternes, S.; Pinet, F.; Chanet, J.P.; Miralles, A.; Bernard, S.; de Sousa, G. Computational architecture of OTAG project. EFITA 2009, 2009, 165–172.
  38. Fonseca, F.T.; Egenhofer, M.J.; Davis, C.A.; Borges, K.A.V. Ontologies and knowledge sharing in urban GIS. Comput. Environ. Urban Syst. 2000, 24, 251–272.
  39. Grüninger, M.; Fox, M.S. Methodology for the Design and Evaluation of Ontologies. In Workshop on Basic Ontological Issues in Knowledge Sharing; International Joint Conference on AI (IJCAI-95): Montreal, QC, Canada, 1995.
  40. Gruber, T.R. Toward principles for the design of ontologies used for knowledge sharing? Int. J. Hum. Comput. Stud. 1995, 43, 907–928.
  41. Pinet, F.; Ventadour, P.; Brun, T.; Papajorgji, P.; Roussey, C.; Vigier, F. Using UML for Ontology Construction: A Case Study in Agriculture. In Seventh Agricultural Ontology Service (AOS) Workshop on “Ontology-Based Knowledge Discovery: Using Metadata and Ontologies for Improving Access to Agricultural Information”; Macmillan Edition, India Ltd.: Bangalore, India, 2006; pp. 735–739.
  42. Uschold, M.; King, M. Towards a Methodology for Building Ontologies. In Workshop on Basic Ontological Issues in Knowledge Sharing, Held in Conjunction with IJCAI-95; The University of Edinburgh: Edinburgh, UK, 1995.
  43. Goldstein, A.; Fink, L.; Raphaeli, O.; Hetzroni, A.; Ravid, G. Addressing the ‘Tower of Babel’ of pesticide regulations: An ontology for supporting pest-control decisions. J. Agric. Sci. 2019, 157, 493–503.
  44. Brank, J.; Grobelnik, M.; Mladenić, D. A survey of ontology evaluation techniques. In Proceedings of the Conference on Data Mining and Data Warehouses (SiKDD 2005), Ljubljana, Slovenia, 17 October 2005.
  45. Yu, J.; Thom, J.A.; Tam, A. Ontology evaluation using Wikipedia categories for browsing. In Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management, Lisbon, Portugal, 6–10 November 2007.
  46. Gómez-Pérez, A. Towards a framework to verify knowledge sharing technology. Expert Syst. Appl. 1996, 11, 519–529.
  47. Gómez-Pérez, A. Evaluation of ontologies. Int. J. Intell. Syst. 2001, 16, 391–409.
  48. Brewster, C.; Alani, H.; Dasmahapatra, S.; Wilks, Y. Data driven ontology evaluation. In Proceedings of the International Conference on Language Resources and Evaluation, Lisbon, Portugal, 26–28 May 2004.
  49. Maedche, A.; Staab, S. Measuring Similarity between Ontologies. In Proceedings of the European Conference on Knowledge Acquisition and Management, Sigüenza, Spain, 1–4 October 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 251–263.
  50. Bournaris, T.; Papathanasiou, J. A DSS for planning the agricultural production. IJBIR 2012, 6, 117.
  51. Guarino, N.; Welty, C. Evaluating ontological decisions with OntoClean. Commun. ACM 2002, 45.
Figure 1. A framework for evaluation method selection. Each quadrant focuses on a single agricultural ontology purpose (Goals from Table 1) and depicts its relevant levels of evaluation (each in a different color) and the evaluation methods they require. In cases where several methods apply, the thickness of the arrows indicates the relative importance of the method for that level of evaluation.
Figure 2. Evaluation process for using the ontology evaluation framework.
Table 1. A summary of agricultural ontologies, their goals, and whether their construction and evaluation processes are described.
Columns: Goal (Share Vocabularies, Integrate Data 1; Knowledge Search and Exploration 2; System Interoperability 3; Decision Support 4); Construction Method Described 5; Evaluation Method Described 6. Rows: Study.
OTAG [6,37]
Urban GIS [6,38] partial
Chinese Agricultural Thesaurus [18]
PLANTS [31] ++
ONTAgri [30]
Smart Farm with GSN [32]
Precision Dairy Farming [22] +
Crop Cultivation Standards [19] +
Precision Agriculture Knowledge [21]
Ontology-based AIS [20] Partial+
Crop-pest Ontology [17] +
Organic Agriculture Learning [28] Partial
Soil and Fertilization [24] ++
Crop Pest and Disease [26] +Partial
1 Sharing common understanding of the structure of information among people or software [5]; 2 Supporting the sharing of agricultural knowledge (e.g., best practices) and enabling people to search and explore it; 3 Enabling agricultural information systems and smart machinery to communicate and exchange information; 4 Supporting agricultural decisions; 5 Including a description of the ontology construction method; 6 Including a description of the ontology evaluation method.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
