Context-awareness is a key feature of recent information systems. It originated in the ubiquitous computing scenarios described by Mark Weiser [1]. Context-awareness is used in many fields of computer science, such as recommendation systems and human-computer interaction studies. It may be defined as the ability of applications or architectures to adapt and customize their execution based on information obtained from the environment [2]. In ubiquitous architectures, context-awareness may directly influence several operations, among them [3]: (i) Abstraction of Context Information; (ii) Association of Contexts with Data; (iii) Context-Based Discovery of Resources; (iv) Context-Based Actions; and (v) Context-Based Services Selection.
Recent research [4] demonstrated that context modeling based on ontologies is the approach that best meets important requirements of ubiquitous systems, such as extensibility and the level of formality required to represent contexts. Based on context modeling with ontologies, tools and methodologies have been created to persist and recover this information in ubiquitous systems [7]. Ubiquitous systems use many information sources, including ontologies and databases, to retrieve content related to the user [10]. However, not all the information considered in ubiquitous systems is represented in ontologies, especially domain-specific information, which is frequently represented and persisted in the relational databases where non-ubiquitous information systems were modeled [11]. Therefore, to retrieve this domain data in a contextualized way, it is necessary to create a form of interconnection between context information, represented in ontologies, and information about the application domain, stored in relational databases [11]. Recent research proposed formalisms and tools [11] to provide CAR (Context-Aware Retrieval). CAR tools have followed two different approaches: (i) integrating context-awareness into legacy systems to retrieve content; and (ii) integrating context-awareness into the modeling phase of information systems. However, there are no specific methodologies for the application of the formalisms proposed in CAR models.
This paper presents the UPCaD (Unified Process for Integration between Context-Awareness and Domain) methodology. The methodology was defined based on a well-known software engineering methodology (UP, the Unified Process [14]) and on an ontology engineering methodology (UPON, the Unified Process for Ontology Engineering [15]). Many of the processes used in these methodologies were reused, making the methodology easy to understand for the actors involved in its application. The main objective of UPCaD is to guide, through a set of processes, the implementation of CAR models to integrate domain-specific data and context modeling.
The UPCaD methodology reuses the UP definition of phases. UP is applied in software engineering, while the focus of UPCaD is the integration between context-awareness and domain data. The UPON methodology extends UP for application in the ontology engineering field. UPCaD reuses a set of processes originally defined in UPON in one of its workflows (the Context Workflow).
To evaluate the proposed methodology, we applied it to a previously validated ontology network [16] that describes context information about students and content in a real MOOC (Massive Open Online Course) about Motivational Interviewing, using a set of formalisms to integrate context information and domain information [13]. These formalisms were defined as a set of linking rules, which are used in the workflows of UPCaD. All the other formalisms of the integration, apart from the linking rules, are defined in this work.
The paper is structured as follows: Section 2 presents the main concepts related to the modeling of context in context-aware applications and recent research on the integration between context and domain information. Section 3 describes the motivational real-world scenario of this research. Section 4 presents the UPCaD methodology and the formalisms and processes that compose it. Section 5 presents the evaluation of the methodology, with its application to a legacy system and a comparison of characteristics between UPCaD and related work. Finally, Section 6 presents the conclusions and a list of future work possibilities.
3. Research Motivation—Recommendation of Resources in Smart Universities
Education is constantly evolving in relation to teaching processes. Recently, technologies such as mobile computing have contributed to a greater dissemination of information and educational support materials [34]. As there is a large volume of information and materials available to teachers and students, it is necessary to add filters to the queries over this information. This procedure restricts the results to the student's field of study and to the context at query time [34]. Currently, teachers assemble and distribute materials in Virtual Learning Environments (VLEs). Materials are distributed and activities are performed in these environments by students always in the same way, disregarding the context variables involved in performing these tasks. This is known as the “one size fits all” model. The problem is accentuated in MOOCs, mainly because there are larger variations in cultural and geographical issues when compared to a traditional distance course. This variety of factors directly influences the low completion rate of these courses, which has been between 5% and 9% [35]. Thus, contextual information must be taken into account in the way these materials are distributed to the students of these courses [36].
Recent work [37] presented intelligent university environments, which are composed of a set of resources and intelligent systems that manage these resources and recommend them to students according to their necessities. These systems are linked to a ubiquitous middleware, which provides abstractions for applications to use the features and technologies present in ubiquitous environments.
In general, the ubiquitous middleware is responsible for managing the context information that describes the environment [6]. Figure 3 shows an overview of the entities and technologies used in a scenario based on the use of the CARLO recommender system [38]. As may be observed, CARLO uses the context information provided by a ubiquitous middleware. The context is modeled in an ontology network based on the PiVOn ontology [25]. The ontology extends PiVOn to represent the specific contexts of interest of CARLO. In parallel, a MOOC platform used by the university employs an RDBMS to store content about MOOC courses and statistics about the use of the courses by students. To use the context information provided by CARLO in the MOOC platform, integration models as presented in [11] are used to avoid a total re-engineering of the IS.
Based on the concepts and technologies presented earlier, let us imagine the following scenario, based on the description of a real-world application in [16]:
“A university uses a ubiquitous middleware to manage the educational resources present. This middleware uses an ontology as a context representation model. The ontology contains a set of axioms that, together with a set of processes managed by the middleware, allows data collected from the environment to be aggregated and, after inferences, considered as context information. Thus, the context management of this ubiquitous middleware infers the situations in which students and teachers are. The university also has a recommendation system that presents warnings to students about educational resources that may be of interest. Therefore, the context of interest of each student and teacher is inferred and sent to the recommendation system, which recommends to the student or teacher an interesting resource available at the university.
A MOOC in the area of psychology and medicine is composed of Motivational Interviewing (MI) cases. Motivational Interviewing is a patient-centered approach to encourage behavioral changes that increase effectiveness in treating conditions such as obesity, smoking, and depression. The group of techniques that make up an MI has been constantly studied in a wide range of health-related behaviors. To prepare the MOOC on motivational interviewing, the technical staff assembled the course with a structure composed of topics, where each topic consists of supporting material (video, document or presentation) and a set of questions. If the student correctly answers the questions related to a topic, progression to the next one is allowed and new motivational interviewing situations are introduced. The student completes the course when he has finished all the tasks related to the basic concepts and more than 50% of the exercises related to the specific motivational interviewing cases applied to different contexts (illness to be treated, motivation, among others).
Johan is a student of the collective health course at the university that uses the ubiquitous middleware and the resource recommendation system. At the time of enrollment, Johan registered his smartphone in the university's system and from then on began to receive resource recommendations. Johan lives in the city of Smartville. Currently, Johan is attending the semester and, during a class of Psychology Applied to Health, the recommendation system used by the university sends an alert to Johan's smartphone recommending that he enroll in the MOOC on Motivational Interviewing. During the class break, Johan visualizes the alert and enrolls in the Motivational Interviewing MOOC. At this point, the MOOC asks Johan for permission to receive the context information managed by the university's ubiquitous middleware. Johan authorizes it, and the MOOC begins to receive constant updates of his contexts of interest.”
During a class break, Johan accesses the MOOC using a smartphone and notes the existence of 3 conceptual introduction topics.
4. UPCaD Methodology
The UPCaD (Unified Process for Integration between Context-Awareness and Domain) methodology is defined as an extension of the UPON [15] and UP (Unified Process) [14] methodologies. UPON was defined for the creation of large-scale ontologies. As the representation of context information is frequently made using ontologies, UPCaD is proposed as a methodology to guide the execution of models that use context information in data querying.
4.1. Methodology Overview
The methodology is organized in evolutionary implementation cycles, each composed of workflows that describe a series of operations. The number of implementation cycles varies according to the designers' need to associate: (i) new context elements in information filtering; or (ii) a new source of information (database) inserted in the filtering process.
Each workflow has inputs and outputs. In each implementation cycle, five workflows, named Context Workflow, Domain Workflow, Alignment Workflow, Serialization Workflow and Query Test Workflow, must be executed. The methodology overview is presented in Figure 4.
The involvement of experts in the field of application and of ontology engineers varies according to each of the workflows that compose the phases of the methodology. The structure of workflows for each phase, with inputs and outputs, was based on the UPON methodology [15], which is used directly only in Phase 1 (Context Workflow).
4.2. Context Workflow
The Context Workflow is a set of steps to define or extend an ontology that describes the context to be used in query extension by applying a contextual query model. This workflow follows the steps defined in the UPON methodology, with some modifications in the order of execution and the creation of new processes in this methodology.
An overview of the Context Workflow processes is presented in Figure 5. The processes not highlighted in the figure were defined in the UPON methodology and are described as follows:
Definition of the Domain of Interest and Scope: This consists of the identification of the main concepts that must be represented and their characteristics, and the definition of the scope of the domain that will be represented. For this, ontological commitments must be made in this process;
Writing Storyboards: In this step, the application domain expert writes one or a series of storyboards that describe sequences of activities in a given scenario. These storyboards describe a set of application scenarios and contexts;
Creation of the Application Lexicon (LA): The lexicon of the application is a set of the main terms contained in documents, collected from the storyboards developed in the previous process;
Identifying Competency Questions (QC): Competency questions are conceptual-level questions that the resulting ontology must be able to answer. They are defined based on interviews with domain experts, users, and developers, and they define the coverage and depth of the ontology representation over the modeled domain. The UPON methodology defines two types of competency questions: (i) oriented to the discovery of resources or content; and (ii) oriented to semantic interoperability between different schemas. In the developed methodology, only questions of the first type are considered;
Importing Vocabulary Used in Ontology-Based Context Modeling: The vocabulary of a context representation ontology is imported at this stage to avoid redundancy of concepts in the modeling phase;
Creating the Reference Lexicon (LR): The reference lexicon is defined from the set of concepts described in the application lexicon, in the imported vocabulary of the context ontology and in the information of the documents used in the previous stages;
Creating the Reference Glossary (GR): The reference glossary is defined by adding informal definitions (sentences about the concept) to the LR using natural language;
Modeling Concepts: Each concept is categorized through the association of type with the concept. The concepts are categorized using the context ontology as top ontology;
Modeling Hierarchies and Relationships: At this stage the categorization of concepts described in a taxonomy is complemented with domain-specific relations, aggregation relations and generalization;
Formalization of the Ontology Using the Formal Language: In this step the concepts and relationships modeled in the design workflow are formalized in the OWL-DL language. This formalization is accomplished through the definition of an ontology, which imports the concepts of the context ontology, forming a network of ontologies.
To validate the ontology, a checklist must be completed covering different characteristics: (i) syntactic quality; (ii) semantic quality; (iii) pragmatic quality; and (iv) social quality [15].
Checking Ontology Consistency Using Inference Engine: Checking the consistency of the ontology must be performed using an inference engine. Here the existence of contradictions in the definition of the ontology are discovered. If there is any contradiction, the inference engine informs the ontology engineer, who should review the process of creating the ontology;
Importing CI Ontology: In addition to importing the context ontology, it is necessary to import the CI ontology. Thus, the concepts created in the extended context ontology should be sub-concepts of the classes of the CI ontology;
Using the Integration Algorithm with the CI Ontology: After the importation of the ontologies' vocabularies, the designers must integrate these ontologies, generating a network of ontologies. This network of ontologies will be used in the next steps of the workflow.
After the application of the Context Workflow, a network of ontologies is generated, with its consistency verified through the use of an inference engine and by ontology engineers, through the verification of the ontology coverage and the answering of the competency questions. This ontology network is used as input to the Domain Workflow, shown below.
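To illustrate the kind of contradiction detected in the consistency-checking step, the toy sketch below (with hypothetical class names) flags an individual asserted to belong to two disjoint classes; in practice, a DL reasoner such as Pellet performs this check over the full OWL-DL network:

```python
# Toy illustration of the contradictions an inference engine reports during
# consistency checking. Class and individual names are hypothetical; a real
# check runs a DL reasoner (e.g., Pellet) over the OWL-DL ontology network.

# Disjointness axioms: pairs of classes declared owl:disjointWith.
disjoint_axioms = {("Student", "Device"), ("Location", "Device")}

# Class assertions: individual -> set of asserted classes.
assertions = {
    "johan": {"Student"},
    "johan_phone": {"Device", "Student"},  # contradiction: disjoint classes
}

def find_inconsistencies(assertions, disjoint_axioms):
    """Return (individual, class_a, class_b) triples violating disjointness."""
    violations = []
    for individual, classes in assertions.items():
        for a, b in disjoint_axioms:
            if a in classes and b in classes:
                violations.append((individual, a, b))
    return violations

print(find_inconsistencies(assertions, disjoint_axioms))
# -> [('johan_phone', 'Student', 'Device')]
```

When such a violation is found, the ontology engineer must review the creation process, as described above.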
4.3. Domain Workflow
The Domain Workflow defines a series of processes for integrating domain information in the model. An overview of the workflow is presented in Figure 6. The workflow consists of the following steps:
Obtain sources of information: At this stage the domain expert must obtain the information sources that will be used in the model development. As the focus of this research is the query extension model, a database schema must be employed as the source of information in this step;
Select a formal language: In this step, the ontology engineer makes the choice of a formal language that will represent the database schema. As the ontology network generated in Context Workflow is represented in the OWL-DL language, it is recommended to choose the OWL-DL or RDF languages in this step for compatibility purposes;
Obtain representation of the database schema in the formal language: Because the context representation in the model is made in OWL-DL, the database schema used in the model must be represented in the same language. Currently, there are tools capable of converting relational schemas to OWL-DL files through R2RML mappings; RDBToOnto is an example of a tool used in several projects for this purpose [33];
Check consistency using inference engine;
Import and Integrate the DI Ontology: After the first verification, the ontology engineer imports the DI ontology;
Check the consistency using the inference engine;
Import the ontology network resulting from the Context Workflow: The ontology network generated in the Context Workflow is imported and integrated in the ontology network generated in the Domain Workflow;
Check for consistency using the inference engine.
After these steps, the context and domain definitions are interconnected through a network of ontologies. This allows linking rules to be set to interconnect the definitions at the time of the data query.
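As an illustration of the schema-to-ontology conversion performed in this workflow, the sketch below (with a hypothetical schema) emits a direct-mapping-style OWL-DL fragment in Turtle; tools such as RDBToOnto implement a far richer conversion through R2RML mappings:

```python
# Minimal sketch of the schema-to-OWL conversion step: each table becomes an
# OWL class and each column a datatype property, in the spirit of a direct
# R2RML mapping. Table and column names are hypothetical.

schema = {
    "course_module": ["id", "title", "duration_min"],
    "support_material": ["id", "module_id", "media_type"],
}

def schema_to_turtle(schema, base="http://example.org/domain#"):
    """Emit a Turtle fragment: one class per table, one property per column."""
    lines = [
        "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
        "@prefix ex: <%s> ." % base,
        "",
    ]
    for table, columns in schema.items():
        lines.append("ex:%s a owl:Class ." % table)
        for col in columns:
            lines.append("ex:%s_%s a owl:DatatypeProperty ;" % (table, col))
            lines.append("    rdfs:domain ex:%s ." % table)
    return "\n".join(lines)

print(schema_to_turtle(schema))
```

The generated fragment can then be integrated with the DI ontology and checked for consistency, as in the steps above.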
4.4. Alignment Workflow
In the Alignment Workflow, the stages of defining the series of contexts to be taken into account in the query extension process, and the queries involved in these processes, are carried out. An overview of the workflow is presented in Figure 7.
The workflow consists of the following steps:
Obtain the resulting ontology from Domain Workflow;
Create a list of contexts related to each query (LCxt): Based on the competency questions, the application domain expert and the ontology engineer create a list of: (i) context entities; (ii) context attributes; and (iii) semantic contextual relations, which are related to the competency questions defined earlier. These elements will be used in the definition of the linking rules;
List queries based on competency questions: In this step, the application domain expert creates a list (LC) of the queries that are used in the system and that may be contextualized;
Create DomainAsContext linking rules: In this step, if necessary, the modeler creates the rules of DomainAsContext type. These rules are different in relation to other rule types because they create new individuals in the network of ontologies, allowing new semantic relationships to be created;
Create Relationships in Ontology: If domain and context rules are created, new individuals are created in the ontology network. After the creation of these individuals, new semantic relationships can be defined in the network of ontologies with these new individuals;
List relational algebra expressions associated with each rule (EA): From LCxt and LC, the application domain expert defines a series of relational algebra expressions (EA) that represent the filtering that must be performed according to each context element, for each query;
Modeling linking rules (RL): Using LCxt, LC and EA, the domain expert models the linking rules, using the definitions previously presented in [13];
Test linking rules: After the definition of the linking rules by the modeler, the set of defined rules and the ontology network generated in the process are serialized in a database instance, and the modeled rules are tested. In this step, rules are tested only in terms of syntax and semantics, through contexts of interest entered manually and verification of the query results by the domain expert.
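The relationship between the context elements (LCxt), the listed queries (LC) and the relational algebra expressions (EA) inside a linking rule can be sketched as follows; the rule structure and field names are hypothetical, loosely inspired by the rule types defined in [13]:

```python
# Hypothetical sketch of a linking rule tying a context element (from LCxt)
# to the relational-algebra filter (EA) of a listed query (LC). The field
# names and rule structure are illustrative only.

linking_rules = [
    {
        "rule_type": "ContextAsFilter",
        "query_id": "Q1",                    # entry of the query list LC
        "context_entity": "Student",         # entry of the context list LCxt
        "context_attribute": "availableTime",
        "ea_filter": "duration_min <= {availableTime}",  # EA as an SQL fragment
    },
]

def filters_for(query_id, context_of_interest, rules):
    """Instantiate the EA filters of the rules triggered by the current context."""
    out = []
    for rule in rules:
        if rule["query_id"] == query_id and rule["context_attribute"] in context_of_interest:
            out.append(rule["ea_filter"].format(**context_of_interest))
    return out

print(filters_for("Q1", {"availableTime": 10}, linking_rules))
# -> ['duration_min <= 10']
```

Testing a rule then amounts to entering a context of interest manually, instantiating its filters, and checking the query results, as described in the last step above.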
4.5. Serialization Workflow
The Serialization Workflow deals with the set of processes required to ensure the persistence of the definitions made previously (the linking rules and the ontology network). A view of the Serialization Workflow is presented in Figure 8.
The workflow consists of the following steps:
Obtain the Alignment Workflow linking rules;
Obtain an ontology network resulting from Alignment Workflow;
Obtain domain database schema;
Serialize network of ontologies: After the step of collecting the information generated in previous processes, the network of ontologies is serialized in a format compatible with the model. In the prototypes developed to support the model, we used the serialization of ontologies in the JSON-LD format;
Serialize linking rules: The serialization of the linking rules occurs through the execution of the algorithms associated with each rule. Each algorithm generates a JSON-LD file that is persisted in the same instance of the relational database that stores the domain information;
Verify consistency: After persisting the ontology network used by the model, the consistency of the information must be verified. To perform this check, it is recommended to use an inference engine.
After the persistence of the information, the methodology provides for the execution of tests to evaluate the application of the model in the system. The workflow that describes this evaluation is presented in the next section.
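As a sketch of the serialization step, the fragment below writes a hypothetical piece of the ontology network as JSON-LD using only a standard JSON library; the resulting string can then be persisted in the same RDBMS instance as the domain data:

```python
import json

# Sketch of the JSON-LD serialization of an ontology-network fragment.
# The vocabulary terms and URIs are hypothetical; the model serializes the
# full network and the linking rules this way.

ontology_fragment = {
    "@context": {
        "owl": "http://www.w3.org/2002/07/owl#",
        "ex": "http://example.org/context#",
    },
    "@graph": [
        {"@id": "ex:CoursePart", "@type": "owl:Class"},
        {"@id": "ex:Technique", "@type": "owl:Class"},
    ],
}

# Serialize for storage (e.g., in a TEXT column of the domain RDBMS) ...
serialized = json.dumps(ontology_fragment, indent=2)

# ... and restore it for the consistency check.
restored = json.loads(serialized)
print(restored["@graph"][0]["@id"])
# -> ex:CoursePart
```

The round trip (dump, then load) is what makes the consistency verification after persistence possible.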
4.6. Query Test Workflow
To evaluate the application of the model in the system/application scenario, it is necessary to integrate the query extension algorithm into the system and carry out the evaluation. Two processes must be performed after applying the model to the system:
Apply query expansion algorithm: The application of the query expansion algorithm is performed by associating the integrator module with the information system;
Verify answers to competency questions: In this step, the application domain expert verifies, through the execution of queries or verification with the users, whether the queries answer the competency questions listed in the first stage of the methodology.
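A minimal sketch of the query expansion performed in this workflow, assuming a hypothetical MOOC table and a context-derived filter, is shown below:

```python
import sqlite3

# Sketch of the query-expansion step: the original system query is extended
# with a context-derived WHERE condition before execution. The schema, data
# and filter are hypothetical stand-ins for the MOOC database and the
# filters produced by the linking rules.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE course_module (id INTEGER, title TEXT, duration_min INTEGER)")
conn.executemany("INSERT INTO course_module VALUES (?, ?, ?)",
                 [(1, "Basic concepts", 8), (2, "MI cases", 25)])

def expand_query(base_query, context_filters):
    """Append context filters to the WHERE clause of the original query."""
    if not context_filters:
        return base_query
    return base_query + " WHERE " + " AND ".join(context_filters)

base = "SELECT title FROM course_module"
context_filters = ["duration_min <= 10"]   # e.g., derived from a 15-min break
extended = expand_query(base, context_filters)
print([row[0] for row in conn.execute(extended)])
# -> ['Basic concepts']
```

With no active context the original query is returned unchanged, so the legacy system keeps working when no linking rule is triggered.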
An overview of the workflow is presented in Figure 9.
5. Evaluation
To evaluate the UPCaD methodology, we applied it in the scenario presented in Section 3, based on the use of the CARLO recommender system and the MOOC platform used in a university. Figure 10 presents an overview of the processes that compose the workflow. Each process was assigned a number that is referenced in the text describing the evaluation. The evaluation of the methodology was based on the evaluation used in related works [11]. To evaluate the methodology, we implemented the SQL-eCO plugin for Protégé. Figure 11
shows an example of the interface of the SQL-eCO plugin. (More information about the developed plug-in is available at: https://github.com/viniciusmaran/SQL-eCO-plugin-public. The plugin offers a set of services for modelers to create linking rules and test them in relational queries. The plug-in was developed in Java, using the Protégé software APIs to manage and verify the consistency of ontological models. The plug-in is composed of four user interfaces, which are used to configure the three types of linking rules and to configure and test relational queries using query extension. After defining and testing the rules in the plug-in, modelers can export the definitions for a particular RDBMS schema.)
To evaluate the methodology, five actors carried out its implementation. Three actors are related to the development of the CARLO recommender system; they are ontology engineers who maintain the CARLO ontologies. The other two actors are RDBMS administrators, who write queries and maintain the RDBMS with the information about the MOOCs.
It is important to note that we did not use recall and precision measures in this paper, as its focus is the methodology and not the evaluation of the linking rules or of the CAR model. The evaluation of the model, considering specifically the linking rules, was presented in [13].
The first step in the integration process is to determine the domain of interest and the scope of the definition of the extended context ontology (1) from the use of the ontology employed in the ubiquitous middleware.
5.1. Application of the Context Workflow
Considering the application scenario presented in Section 3, the domain of interest of the context ontology is the presentation of modules of the motivational interviewing course according to context information from the ontology used by the ubiquitous middleware. The scope is defined as the contextualized selection of modules and support materials according to specific student situations and context information related to profiles, devices, location, student interests and learning focus.
From the definition of the domain of interest and scope, a set of storyboards (2) was defined that describes the situations of use of the system, with the context involved in each of these situations. The storyboards are taken from the description of the application scenario.
Storyboard 1: Johan is a public health student at the university campus. He registers and accesses the motivational interviewing MOOC during the break between classes through his smartphone, while staying at the cafeteria. As Johan has just enrolled in the course and makes his first access, only course modules related to learning the basic concepts of motivational interviewing are presented to him. Since Johan is in the break between classes, uses a device with limited viewing capabilities, and the break lasts 15 min, no supporting material with videos longer than 10 min is displayed;
Storyboard 2: Johan is a public health student on the university campus. He enrolled in the MOOC on motivational interviewing, and during one of his undergraduate classes, he introduced the MOOC to some of his colleagues. As Johan is in class, the MOOC presents only general information about the course, such as the presentation about the course and the discussion forum;
Storyboard 3: Johan is a public health student on the university campus. Johan is interested in areas such as promoting physical activities for patients and combating smoking. This interest is due to two main factors: (i) Johan's father smokes; and (ii) Johan has a close friend with behavioral obesity. Johan goes to the computer lab after lunch and accesses the motivational interviewing MOOC through one of the lab computers. Johan has already completed the activities related to learning the basic concepts of motivational interviewing. Thus, the parts of the course related to the concepts of “listening to the patient's motivation”, “resistance to the correction reflex” and “empowering the patient” are presented to Johan. Only the cases where the focus is the promotion of physical exercise or the fight against smoking are presented.
After defining the storyboards, the Application Lexicon (3) was defined, as presented in Table 2. The application lexicon was defined based on queries about the concepts related to the MI course and the context ontology.
Based on the previously created storyboards and the definition of the application lexicon, the competency questions that the resulting ontology must be able to answer (4) after the application of the linking rules were defined. The set of competency and evolution questions is presented in Table 3.
After defining the competency questions, the context ontology vocabulary is imported (5). The vocabulary of the context ontology consists of the names of the represented classes, object properties, and data properties. Table 4 presents the vocabulary terms imported from the ontology.
The next step in the application of the methodology is the creation of the reference lexicon (6). The lexicon is generated by the intersection of the application lexicon with the vocabulary of the context ontology. From the reference lexicon, the reference glossary (7) was defined, with a description of the meaning of each term.
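The construction of the reference lexicon as the intersection of the application lexicon with the context ontology vocabulary can be sketched as a simple set operation; the terms below are hypothetical samples, not the actual entries of Tables 2 and 4:

```python
# Sketch of the reference-lexicon construction (step 6): the terms shared by
# the application lexicon and the context-ontology vocabulary. All terms
# shown here are hypothetical samples.

application_lexicon = {"Student", "CoursePart", "Break", "Smartphone", "Interest"}
context_vocabulary = {"Student", "Device", "Location", "Interest", "Activity"}

# Set intersection yields the reference lexicon (LR).
reference_lexicon = application_lexicon & context_vocabulary
print(sorted(reference_lexicon))
# -> ['Interest', 'Student']
```

Each resulting term then receives an informal natural-language definition to form the reference glossary.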
The reference glossary generated (Table 5) is employed in the modeling of concepts (8) and relations (9) in the ontology network. Each glossary term is classified according to its type of information and related to the terms already defined in the context ontology. The semantic network of concepts is presented in Figure 12. As can be seen in the figure, some of the concepts are modeled as classes, while others are modeled as individuals in the semantic network. In addition, relationships are created between the new definitions and the existing definitions in the context ontology. The definitions created in this process are displayed in Figure 12.
The language (10) for the representation of the semantic network is the OWL-DL language, to maintain compatibility with the representations used by the context ontology. The ontology network is formalized using Protégé (11). After the formalization of the ontology network the consistency of the ontology network is verified using the Pellet inference engine (12).
The CI ontology was imported (13) and integrated (14) to the ontology network generated in the previous process using the SQL-eCO plugin. This network of ontologies is used in the next stages of the methodology.
The ontology coverage check (16) is done by the application domain expert and by the ontology engineer. It is performed by verifying whether all contexts of interest were modeled as concepts in the ontology. After defining the context representation ontology network, the workflow related to the application domain is executed.
5.2. Application of the Domain Workflow
Access to the database schema (17) in MySQL is configured using an SSH connection. The language chosen for the formalization of the schema (18) is OWL-DL, to maintain compatibility with the context ontology used in the middleware. The OWL-DL representation of the database schema (19) is obtained using the RDBToOnto tool. Table 6 presents an overview, in number of elements, of the OWL-DL ontology resulting from the conversion process.
The OWL-DL representation is validated syntactically and semantically (20) using the Pellet inference engine. After this verification, the SQL-eCO tool is used to import (21) and perform the integration (22) of the DI ontology with the OWL-DL representation of the database schema. As output from this process, an ontology network is generated and stored. After completing this step, the consistency of this network of ontologies is verified with the Pellet inference engine (23).
The ontology network resulting from this process contains only representations related to the application domain. To be able to create linking rules considering a single ontology network, it is necessary to integrate the ontology network resulting from the Context Workflow (24). This integration is accomplished through the SQL-eCO plugin by defining the location of the ontology networks resulting from the Context Workflow and from the integration between the DI ontology and the OWL-DL representation of the domain database schema. As a result of this process, a network of ontologies is generated, and its consistency is verified (25). The ontology network is used as input to the Alignment Workflow to create the linking rules. The process of creating the rules is presented in the next section.
5.3. Application of the Alignment Workflow
From the ontology network generated in the previous workflow and the domain database, the linking rules can be defined and tested. To do this, the file representing the resulting ontology network was opened in the Protégé software (26) and a connection to the domain database was configured with the SQL-eCO plugin.
To model the linking rules, it is necessary to list the context elements of interest related to each competency question (27). Table 7
presents the context elements related to each competency question and their classification in the context ontology (class, object property, data property, or individual).
In addition to the context elements, the queries related to each competency question are listed (28). These queries are listed in this step exactly as they are used by the information system, without modifications. Table 8
presents these queries.
Some queries are repeated across competency questions, since some of the questions relate to the same portion of the domain data but vary in their context elements. In the application scenario, competency questions QC1 and QC2 use the same query, which returns the tree structure of the modules and their submodules, as well as the JSON definition of these modules.
Questions QC3, QC4, and QC5 also use the same query, which returns the basic course structure, with the main modules of the course taken by the student whose id is given in the WHERE clause of the query. Since none of the course modules was related to any MS technique or to the individuals defined in the Technique class, it was necessary to define a DomainAsContext rule (29) for each of the main modules of the motivational interview course, associating them with the CoursePart class.
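A hypothetical sketch of what a DomainAsContext rule amounts to: rows returned by a query over the domain database are materialized as individuals of a class in the context ontology. The rule fields, table and column names, and output format below are illustrative assumptions, not the actual SQL-eCO rule syntax.

```python
# Hypothetical DomainAsContext rule: database rows become individuals of a
# context-ontology class (here, CoursePart). Names are illustrative only.
rule = {
    "type": "DomainAsContext",
    "query": "SELECT id, name FROM module WHERE parent_id IS NULL",
    "target_class": "CoursePart",  # class in the context ontology
}

def materialize(rule, rows):
    """Turn each returned row into an OWL class-assertion triple."""
    triples = []
    for row in rows:
        individual = f"module_{row['id']}"
        triples.append((individual, "rdf:type", rule["target_class"]))
    return triples

# Fabricated rows standing in for the course's main modules:
rows = [{"id": 1, "name": "Introduction"}, {"id": 2, "name": "Techniques"}]
print(materialize(rule, rows))
```
Once such individuals exist in the ontology network, they can be linked to Technique individuals, as described next.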
After defining the rules and applying them with the SQL-eCO plugin, the ontology network is modified by inserting the individuals that represent the modules into the context ontology. With the creation of these individuals, it was possible to associate each course part with an individual of the Technique class, which represents an MS technique (30). Figure 13
shows an example of the has_interest relationships between the individuals of the CoursePart class and the Technique class.
After defining the DomainAsContext rules and creating the necessary links in the ontology, the linking rules related to the context elements of interest for each competency question are modeled (31).
Each query has a query identifier, which is used later in the definition of linking rules.
Query1 was defined to display all course modules, but to limit the display of modules containing videos to those shorter than 10 min (600 s).
Query2 was defined to present only the modules that have no associated videos, due to the limited capabilities of the devices in the context associated with competency question 2.
Query3 and Query4 were defined to present the modules that represent motivational interview cases related to the promotion of physical exercise and to the fight against smoking. Query5 was defined to allow only the visualization of modules of the vertical or overview types, so that the basic structure of the course is presented to the student.
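The effect of such query definitions can be sketched as a context-derived predicate appended to a domain query. The function, table, and column names below are assumptions for illustration; the 600 s threshold comes from the scenario.

```python
# Minimal sketch of a linking rule restricting a domain query with a
# context-derived predicate (the 600-second video limit tied to QC1).
# Table and column names are illustrative assumptions.
def extend_query(base_query, context_predicate):
    """Append a context-derived condition to the query's WHERE clause."""
    if "where" in base_query.lower():
        return f"{base_query} AND {context_predicate}"
    return f"{base_query} WHERE {context_predicate}"

query1 = "SELECT id, name FROM module WHERE course_id = 42"
extended = extend_query(
    query1, "(video_duration IS NULL OR video_duration < 600)")
print(extended)
```
A production extender must handle GROUP BY, subqueries, and parameter binding; this sketch conveys only the basic rewriting idea.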
The SQL-eCO plugin is used to model the linking rules (32). Each rule is modeled by selecting the context element of interest related to the rule and defining the corresponding query in SQL. Table 9
shows the modeled linking rules.
After modeling the linking rules, they are executed with the SQL-eCO plugin (33). The tests of the linking rules are performed using OWL-DL files that represent a context of interest, based on the context ontology resulting from the Context Workflow, together with the execution of the queries associated with each context of interest. To perform the tests in this phase of the methodology, the contexts of interest related to each of the storyboards describing the application scenario are modeled with Protégé. The first context of interest (Figure 14
) is related to the situation presented in storyboard 1.
The second context of interest (Figure 15
) is related to the situation presented in storyboard 2.
The third context of interest (Figure 16
) is related to the situation presented in storyboard 3.
In this phase, it is only verified that the queries do not return empty sets. To carry out this verification, the queries Query1, Query2, Query3, Query4, and Query5 are tested to check whether they represent the expressions related to each competency question.
In addition, the queries Cons1 and Cons2 are also tested. The verifications are carried out with the three contexts of interest in conjunction with the two relational queries. Verification of the answers to the competency questions is performed in the Query Test Workflow. After the linking rules are verified, the Serialization Workflow is performed.
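The non-empty-set check performed in this phase can be sketched as follows, with sqlite3 standing in for the MySQL domain database and an illustrative schema.

```python
import sqlite3

# Sketch of the verification: each query tied to a competency question must
# return at least one tuple for the modeled context of interest.
# Schema and data are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE module (id INTEGER, name TEXT, video_duration INTEGER)")
conn.executemany("INSERT INTO module VALUES (?, ?, ?)",
                 [(1, "Intro", 300), (2, "Case study", 900)])

def has_results(conn, query):
    """True when the query returns a non-empty result set."""
    return conn.execute(query).fetchone() is not None

# A QC1-style query (videos under 600 s) must not come back empty:
query1 = "SELECT * FROM module WHERE video_duration < 600"
assert has_results(conn, query1)
print("non-empty check passed")
```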
5.4. Application of Serialization Workflow
The process of serializing the definitions used by the framework is executed with the SQL-eCO plugin. In the configuration tab, the locations of the ontologies used in the ontology network (35) are indicated and the database connection (36) is configured. In addition, the file containing the definitions of the linking rules (34) is used. The ontology network is serialized to the JSON format (37) and persisted in the MySQL relational database. The linking rules are also serialized (38) and persisted in the same database instance.
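The serialization step can be sketched as follows; sqlite3 stands in for MySQL here, and the rule fields and table layout are assumptions for illustration.

```python
import json
import sqlite3

# Sketch of steps (37)-(38): definitions are serialized to JSON and
# persisted in the relational database. Field names and the table layout
# are illustrative assumptions.
rules = [{"id": "R1", "context_element": "has_interest",
          "query_id": "Query1"}]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE serialized_rules (id TEXT PRIMARY KEY, body TEXT)")
for rule in rules:
    conn.execute("INSERT INTO serialized_rules VALUES (?, ?)",
                 (rule["id"], json.dumps(rule)))

# Round-trip check: the persisted JSON can be recovered and parsed.
stored = json.loads(conn.execute(
    "SELECT body FROM serialized_rules WHERE id = 'R1'").fetchone()[0])
print(stored["query_id"])
```
Persisting the definitions alongside the domain data keeps the framework self-contained within a single database instance, as the methodology requires.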
To verify the consistency of the definitions after their persistence in the relational database (39), the SQL-eCO plugin is used in conjunction with the Pellet inference engine. No consistency errors were found. After completing this step, the query extension tests were performed to evaluate the tuples returned after applying the query extension algorithm. The testing process is presented in the next section.
5.5. Application of Query Test Workflow
The application testing and evaluation workflow is performed in two parts: (i) applying the query expansion algorithm (40); and (ii) verifying the answers to the competency questions (41).
To apply the query expansion algorithm, an API must be imported into the information system project in which the query extension is to be performed. Thus, the queries used in the system are not executed directly through the JDBC driver originally used, but through the API, which redirects each query to the extender. The API was imported into the prototype that implements a query simulator. After this stage, the queries and their results were evaluated after the application of the framework, varying the context of interest.
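The redirection described above can be sketched as a thin wrapper around the database connection; the class, the extender callable, and the schema below are illustrative assumptions, not the actual API.

```python
import sqlite3

# Sketch of the interception API: instead of executing queries directly
# through the original driver, the application calls a wrapper that routes
# each query through the extender first. All names are illustrative.
class ExtendingConnection:
    def __init__(self, conn, extender):
        self._conn = conn
        self._extender = extender  # callable: query -> extended query

    def execute(self, query, params=()):
        # Redirect every query through the extender before execution.
        return self._conn.execute(self._extender(query), params)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE module (id INTEGER, video_duration INTEGER)")
conn.executemany("INSERT INTO module VALUES (?, ?)", [(1, 300), (2, 900)])

wrapped = ExtendingConnection(
    conn, lambda q: q + " WHERE video_duration < 600")
rows = wrapped.execute("SELECT id FROM module").fetchall()
print(rows)  # only the module satisfying the context predicate
```
Because the wrapper exposes the same execute-style interface as the driver, the information system needs no changes beyond swapping the connection object.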
The evaluation of the extended queries is performed by analyzing the tuples returned by each query, comparing these results with the queries that would normally be performed and with their expected results. This comparison is based on the competency questions. Initially, we analyzed the number of tuples returned by each query, with and without the framework, considering each of the contexts of interest presented in the Alignment Workflow. Figure 17
shows the number of resulting tuples for each query, performed with and without the framework, under each of the three contexts of interest based on the application scenario.
It can be seen in Figure 17
that there was a decrease in the number of resulting tuples when the framework was used with a context of interest and the extended query algorithm. Regarding the queries performed under context of interest (a), there was a decrease of 14.84% in the number of tuples returned by the Cons1 query, while the Cons2 query showed no change in the number of tuples returned. This is because the query extension, in this case, filters only the video-type course modules, whereas the Cons2 query returns only course-structure definition modules.
Considering contexts of interest (b) and (c), there were changes only in the Cons2 query. This is because this query returns course elements that describe sections and cases of motivational interviewing. The Cons1 query, in turn, returns only the modules that describe the last level of the course structure tree, i.e., the modules that describe the supporting materials. Considering context of interest (b), there was a decrease of 65.57% in the number of tuples returned; considering context of interest (c), the decrease was 44.26%.
In addition to the quantitative analysis of the tuples returned by each query under the informed context of interest, the query results were analyzed against the competency questions defined in the Context Workflow. The result of this analysis is presented in Table 10.
The results of the queries tested with the contexts of interest derived from the storyboards used in the application of the methodology are compatible with the competency questions highlighted from the storyboards.
It was also observed that some of the linking rules were defined based on contexts of interest that could be used in other application domains. An example is the linking rule defined for competency question 2 (QC2), which relates to the device screen. Based on this, we can assume that the methodology can be applied in other application domains.
In the evaluation process, it was possible to observe that some of the competency questions (QC3 and QC4) were defined to filter educational content according to the student’s context.
It is important to note that, in the evaluation of UPCaD, no tests were performed regarding the evolution of the data schema. As anticipated in other works in the CAR area [7
], the evolution of the data schema requires a revision of the forms of integration.
6. Conclusions and Future Work
Context-awareness has been applied in several ways in ubiquitous systems, such as in choosing services for execution, adapting graphical interfaces, and retrieving content. Context-awareness modeled in ontologies presents a series of advantages over other representation models and, for this reason, has been constantly used in works related to ubiquitous computing.
In this context, this paper presented the UPCaD methodology, based on well-known software engineering methodologies, to guide the integration between context modeling and RDBMS data querying. The evaluation of the methodology in a scenario using the CARLO recommender system and the RDBMS that stores information about the university’s MOOC courses validated the methodology as a guide for the integration between context and domain data.
It was observed during the evaluation of the methodology that considerable effort was required from the team of ontology engineers and application-domain experts to define the linking rules.
According to the team that applied the methodology, who have experience in developing ontologies for context-aware applications, and the RDBMS administrators, the UPCaD methodology aids the use of the CAR model. It was mentioned that the use of processes common to the UP methodology eased the learning curve of the methodology. As future work, we intend to verify and measure this learning curve, varying the team applying the methodology and the application domain.
With the evaluation of UPCaD methodology, the following findings were obtained:
The strength of the proposed approach lies in UPCaD being a methodology based on well-known processes, provided by the UP and UPON methodologies, to guide the implementation of recent CAR models;
The definition of the methodology based on workflows made it possible for multidisciplinary teams to work together, with the involvement of each team varying in each workflow;
The implementation of the workflows was supported by existing and commonly used tools. Examples cited in the evaluation include the Pellet reasoner, the SQL-eCO plugin, the R2RML tool, Protégé, and UML;
At this time, the methodology only uses the algorithms previously defined in the CAR model to semi-automate its implementation. A possible future work is the implementation of algorithms to semi-automate some of the steps and processes of the methodology, reducing the team’s effort in its deployment.