
Computers 2019, 8(3), 67; https://doi.org/10.3390/computers8030067

Article
Process-Aware Enactment of Clinical Guidelines through Multimodal Interfaces
Dipartimento di Ingegneria Informatica Automatica e Gestionale Antonio Ruberti—Sapienza Università di Roma, 00185 Rome, Italy
* Authors to whom correspondence should be addressed.
Received: 21 March 2019 / Accepted: 10 September 2019 / Published: 11 September 2019

Abstract:
Healthcare is one of the largest business segments in the world and is a critical area for future growth. In order to ensure efficient access to medical and patient-related information, hospitals have invested heavily in improving clinical mobile technologies and spreading their use among doctors towards a more efficient and personalized delivery of care procedures. However, there are also indications that their use may have a negative impact on patient-centeredness and often places many cognitive and physical demands on doctors, making them prone to medical errors. To tackle this issue, in this paper, we present the main outcomes of the project TESTMED, which aimed at realizing a clinical system that provides operational support to doctors using mobile technologies for delivering care to patients, in a bid to minimize medical errors. The system exploits concepts from Business Process Management (BPM) on how to manage a specific class of care procedures, called clinical guidelines, and how to support their execution and mobile orchestration among doctors. To allow a non-invasive interaction of doctors with the system, we leverage touch and vocal user interfaces. A robust user evaluation performed in a real clinical case study shows the usability and effectiveness of the system.
Keywords:
TESTMED project; healthcare; mobile device; process-awareness; multimodal interface; clinical guideline

1. Introduction

Healthcare is conventionally regarded as the act of taking preventative or necessary medical procedures to improve a person’s well-being. Such procedures are typically offered through a healthcare system made up of hospitals and professionals (such as general practitioners, nurses, doctors, etc.) working in a multidisciplinary environment with complex decision-making responsibilities.
With the advent of advanced health information technology (HIT) and electronic health records (EHR) in the mid-2000s [1], hospitals started to manage and share patient information electronically rather than through paper records. This has led to a growing usage of handwriting-capable mobile technologies and devices able to sync up with EHR systems, thus allowing doctors to access patient records from remote locations and supporting them in the delivery of care procedures. Consequently, it is not unusual that a doctor visits a patient interacting with several mobile devices at the same time.
Notwithstanding the benefits of EHR systems and mobile technologies in improving the delivery of care procedures [2,3,4,5], there are also indications that their use may have a negative impact on patient-centeredness [6]. This often results in higher physical and cognitive efforts for doctors while visiting patients, making them more inclined to make medical mistakes [7] and to lose rapport with their patients [8,9]. However, as pointed out in [10], multi-tasking and information transfers through EHR systems have become necessary aspects of healthcare environments, which cannot be avoided entirely.
A technological solution supporting doctors in the enactment of care procedures would ideally require a limited physical/cognitive effort to use mobile devices and, at the same time, ensure the continuity of the information flow through EHR systems. To date, however, most existing solutions focus exclusively on one of the foregoing requirements, or on a partial combination of them [11].
On the one hand, the Human–Computer Interaction (HCI) community has investigated how the use of multimodal interfaces has the potential to reduce the cognitive efforts on users that manage complex activities such as the clinical ones. For example, in [12], the authors state that “multimodal interface users spontaneously respond to dynamic changes in their own cognitive load by shifting to multimodal communication as load increases with task difficulty and communicative complexity”. Furthermore, recent research by Pieh et al. [13] has shown that multimodal approaches to healthcare deliver the most effective results, compared to a single modality on its own.
On the other hand, the Business Process Management (BPM) community has studied how to organize clinical activities in well-structured healthcare processes and automate their execution through the use of dedicated Process-Aware Information Systems (PAISs). PAISs are able to interpret such processes and to deliver to doctors and medical staff (e.g., nurses, general practitioners) relevant information, documents and clinical tasks to be enacted, by invoking (when needed) external tools and applications [14]. Nonetheless, current BPM solutions, which are driven by predefined rule lists, have proven suitable for managing just the lower-level administrative processes, such as appointment making, but have made little progress into the core care procedures [15].
Based on the foregoing, in this paper, we present the main findings of the Italian project TESTMED (TESTMED was a 24-month Italian project, and stands for “meTodi e tEcniche per la geSTione dei processi nella MEdicina D’urgenza”, in English: “methods and techniques for process management in emergency healthcare”), whose purpose was to design and develop a clinical PAIS, referred to as the TESTMED system, which investigated touch and vocal interfaces as a potential solution to reduce the cognitive load of doctors interacting with (clinical) mobile devices during the patient’s visit, and a process-aware approach for the automation of a specific class of care procedures, called clinical guidelines (CGs). CGs are recommendations on how to diagnose and treat specific medical conditions, presented in the form of “best practices”. They are based upon the best available research and practice experience [16,17,18,19].
The objective of the project was not to automate clinical decision-making, but to support doctors in the enactment of CGs, delivering to them the relevant clinical information (such as the impact of certain medications, etc.) to reduce the risk arising from a decision. The system exploits concepts from BPM on how to organize CGs and how to support their execution, in whole or in part. In addition, the system supports vocal and multi-touch interaction with the clinical mobile devices. This allows the doctor to switch between different modes of interaction, selecting the most suitable (and least distracting) one during a patient’s visit.
The TESTMED system has been designed through the User Centered Design (UCD) methodology [20] and evaluated in the emergency room of DEA (“Dipartimento di Emergenza ed Accettazione”, i.e., Department of Emergency and Admissions) of Policlinico Umberto I, which is the main hospital in Rome (Italy). The target was to demonstrate that the adoption of mobile devices providing multimodal user interfaces coupled with a process-oriented execution of clinical tasks represents a valuable solution to support doctors in the execution of CGs.
This paper extends our previous works [21,22] in several directions by including many new elements that were previously neglected. To be more specific:
  • The introduction has been partially rewritten and extended;
  • A refined background section describing the characteristics of healthcare processes under different perspectives has been provided;
  • The description of clinical guidelines is more detailed and complete;
  • The section describing related works has been extended significantly with a new contribution describing the state-of-the-art of vocal interfaces;
  • An improved user evaluation section discussing the complete flow of experiments to evaluate the effectiveness and the usability of the system has been proposed, measuring also the statistical significance of the collected results;
  • All other sections of the paper have been edited and refined to present the material more thoroughly.
The rest of the paper is organized as follows: Section 2 provides relevant background knowledge about healthcare processes and CGs, and introduces a concrete CG that will be used to explain the approach underlying the TESTMED system. Section 3 describes the general approach used for dealing with the enactment of CGs, while Section 4 presents the architecture of the system, introducing technical details of its software components. Then, Section 5 presents the outcomes of the user evaluation of the system and some performance tests. Finally, Section 6 discusses relevant works and Section 7 concludes the paper by tracing future work.

2. Background

2.1. Healthcare Processes

In the context of a hospital, the work of the doctors and the medical staff includes the enactment of several organizational and clinical tasks, which are organized in a care pathway customized for the patient to be visited. In addition, various organizational units are involved in the care pathway of a patient. For example, for a patient treated in the department of cardiology, general blood tests at the laboratory and a thoracic RX at the radiology department are required. This means that doctors from different departments must visit the patient, write medical reports and share the clinical results, i.e., all clinical tasks must be performed in certain orders and cooperation between different organizational units is required to properly achieve such tasks [11].
Based on the foregoing, it is possible to identify several healthcare processes of growing complexity and duration. For example, there are short organizational procedures, like patient acceptance, or long-running treatment processes, like physiotherapy. According to [23], healthcare processes can be classified into two abstract groups: elective care and non-elective care.
  • Elective care refers to clinical treatments that can be postponed for days or weeks [24]. According to [25], elective care can be classified into three subclasses: (i) standard processes, which are care pathways where the ordering of activities and their timing is predefined; (ii) routine processes, which are care pathways providing potential alternative treatments to be followed for reaching an overall clinical target; and (iii) non-routine processes, where the next step of the care pathway depends on how the patient reacts to a dedicated treatment [26].
  • Non-elective care refers to emergency care, which has to be enacted immediately, and urgent care, which can be procrastinated for a short time.
To further understand the complexity of a healthcare process, it is possible to classify it into six macro steps [27], organized according to the degree of structuring and predictability they exhibit [28]. Figure 1 shows such a classification.
The six macro steps include:
  • patient registration, which consists of creating a medical case file;
  • patient assessment, where an initial diagnosis for the patient is performed;
  • treatment plan definition, which refers to the realization of a (dedicated) individual care plan;
  • treatment delivery, which consists of enacting the clinical actions provided by the care plan;
  • treatment review, which consists of a continuous monitoring of the impact and efficacy of enacted treatments, in order to provide feedback for the previous steps;
  • patient discharge, consisting of the closure of the case file.
Administrative steps such as patient registration and patient discharge, as well as organizational activities such as lab tests and patient transfer, are typically structured and repetitive. Any potential exceptional behaviour is limited and can be anticipated at the outset. For this reason, they are good candidates for being enacted by traditional approaches for process management [11].
On the contrary, all the other diagnostic and therapeutic steps of the healthcare process can be seen as knowledge-intensive activities, since they depend on medical knowledge and evidence, on patient-specific and case data, and on doctors’ experience and expertise. Moreover, many complicating circumstances—often not easily predictable in advance—may arise during their enactment. For this reason, they typically lead to loosely structured or unstructured processes [29].
To sum up, the overall healthcare process, even in the oversimplified view of Figure 1, combines predictable and unpredictable elements. This makes its complete automation through traditional PAISs, which tend to overly restrict the range of actions of doctors and medical staff, extremely complex [28].
Although it is clear that a gap exists between the process-driven techniques provided by the BPM community and the methodological-driven solutions suggested by the medical informatics field that is unlikely to be solved in near future [11], in this paper, we discuss how a process-aware approach can be efficiently used to support the management of a specific class of care procedures, called clinical guidelines.

2.2. Clinical Guidelines

Over the last years, there has been an increasing interest from the medical community in investigating and developing evidence-based clinical guidelines (CGs). CGs are based on the best available medical research evidence, and are represented in the form of recommended care pathways to support proper decision-making in patient care for specific medical conditions [17,18,30,31]. Typically, a CG does not enforce mandatory requirements, but is used as a reference framework for evaluating clinical practice.
As shown in Figure 2, CGs are defined to capture domain-specific knowledge, and need to be complemented by additional “knowledge layers” that include doctors’ basic medical knowledge (BMK), site-specific knowledge and patient-related information (such as medical history and current conditions) to obtain concrete care pathways that can be applied to a specific patient. Care pathways suggest the required clinical activities, together with their sequencing and timing, for the management of patient conditions. According to [14], the combination of care pathways and patient-related information enables the definition of an individual care pathway, which results in the actual patient treatment process.
One strength of adopting CGs and care pathways in the management of patient care lies in their structuredness and process-oriented perspective, in contrast with traditional clinical decision-making, which leads to loosely structured or unstructured working procedures. The research literature in the medical informatics community has deeply investigated the definition of models, languages and systems for the management of CGs and care pathways, focusing on the so-called “computer-interpretable clinical guidelines” (CIGs). The work [28] presents a recent survey classifying the existing frameworks for managing CIGs, which can be categorized as rule-based (e.g., Arden Syntax), logic-based (e.g., PROforma), network-based (e.g., EON) and workflow-based (e.g., GUIDE). Most of the available formalisms allow for representing CGs as task networks, where activities and decisions are related via scheduling and temporal constraints, often in a rigid flowchart-like structure. This has made the automated enactment of CGs using PAISs and process-oriented approaches a relevant and timely challenge for the medical community [11,28,32].
In this paper, we tackle this challenge by presenting the main findings of the TESTMED project, whose aim was to realize a clinical PAIS able to interpret CGs and orchestrate their execution among doctors and medical staff through mobile technologies and multimodal user interfaces.

2.3. Case Study: Chest Pain

For a better comprehension of the TESTMED project as a whole, in this section, we provide details about a standard CG enacted for patients suffering from chest pain. Chest pain is defined as a pain or discomfort felt anywhere along the front of the body between the neck and upper abdomen. It can be an indicator of a possible heart attack, but it may also be a symptom of another disease. Chest pain is one of the most common reasons for admission to the emergency room (5% of all visits), with high mortality in case of diagnosis failure and improper dismissal (2–4%) [33]. When a patient suffering from chest pain reaches the emergency room, s/he is typically visited by a doctor, who investigates the patient’s history and risk factors. Furthermore, a chest pain score is calculated, which enables the doctor to classify patients into low-risk and high-risk subsets for cardiac events.
Figure 3 shows the chest pain score adopted by DEA. The score is derived from a set of four clinical characteristics: (i) the localization of the pain; (ii) the character of the pain; (iii) the radiation of the pain; and (iv) the associated symptoms. A partial score is associated with each characteristic, and the sum of these values produces a final score that predicts the angina probability. A chest pain score lower than four identifies a low-risk probability of coronary disease, whereas a score greater than or equal to four is classified as an intermediate-high probability of coronary risk, i.e., the patient must be admitted for clinical observation. Different values of the score correspond to different clinical treatments to be followed by the patient.
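To make the scoring mechanism concrete, the following sketch computes a chest pain score from the four characteristics and classifies the risk. The partial scores and answer labels are illustrative placeholders, not the actual values of the DEA scoring table in Figure 3; only the thresholding rule (a score below four is low risk) is taken from the text above:

```python
# Placeholder partial scores per clinical characteristic; the real
# values are those of the DEA chest pain score table (Figure 3).
PARTIAL_SCORES = {
    "localization": {"retrosternal": 2, "precordial": 1, "other": 0},
    "character":    {"oppressive": 2, "dull": 1, "stabbing": 0},
    "radiation":    {"arms_or_jaw": 1, "none": 0},
    "symptoms":     {"sweating_or_nausea": 1, "none": 0},
}

def chest_pain_score(answers):
    """Sum the partial scores of the four clinical characteristics."""
    return sum(PARTIAL_SCORES[c][answers[c]] for c in PARTIAL_SCORES)

def risk_class(score):
    """Score < 4: low risk; score >= 4: intermediate-high risk."""
    return "low" if score < 4 else "intermediate-high"

answers = {"localization": "retrosternal", "character": "oppressive",
           "radiation": "arms_or_jaw", "symptoms": "sweating_or_nausea"}
score = chest_pain_score(answers)   # 2 + 2 + 1 + 1 = 6
print(score, risk_class(score))     # 6 intermediate-high
```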

3. Enactment of Clinical Guidelines with TESTMED

The main challenge tackled by the TESTMED project was to reduce the gap between the fully automated solutions provided by the BPM community and the clear difficulties of applying a traditional process management approach in the healthcare context. To realize this vision, the major outcome of the project was the development of a clinical PAIS, referred to as the TESTMED system, enabling the interpretation and execution of CGs and their presentation to doctors and medical staff through multimodal user interfaces.
The TESTMED system is thought to be used when a patient suffering from a medical condition (amenable to a CG) asks for a visit. Doctors are provided with a tablet PC (supporting touch and vocal interaction) that runs the TESTMED system. Thus, the doctor is enabled to select, instantiate, and carry out specific CGs.
For example, in the case of chest pain, the doctor starts filling in a survey for determining the severity of the patient’s medical condition, which is expressed through a chest pain score (cf. also Section 2.3). The survey is presented to the doctor on the graphical user interface (GUI) of the tablet PC where the TESTMED system is installed (see Figure 4a). The interaction can be performed by exploiting the touch features of the tablet or vocally, through integrated speech synthesis and recognition. The grey icon with a microphone located at the top-right of the GUI in Figure 4a is shown only when the interaction shifts from touch to vocal. It serves as visual feedback that the vocal interaction is working properly.
The vocal interaction requires that the doctor wears a headset (The use of a headset guarantees both a higher quality of the vocal interaction in a noisy environment like the hospital ward’s one, and that the privacy of the visited patient is preserved) with a microphone linked to the tablet; s/he can listen to the questions related to the survey and reply vocally by choosing one of the speech-synthesized possible answers. Each answer is associated with a specific characteristic and provides an associated rate.
Once the survey is completed, the TESTMED system elaborates a dedicated therapy, including a sequence of clinical treatments and analyses prescribed to the patient. The therapy is structured in the form of a care pathway. For instance, when the chest pain score is greater than 4, the suggested care pathway is the one shown in Figure 5. For the sake of readability, we have modeled the care pathway in the Business Process Model and Notation (BPMN is the standard ISO/IEC 19510:2013, cf. https://www.iso.org/standard/62652.html, to model business processes). The reader should notice that BPMN is not the notation employed to concretely represent and encode a CG in the TESTMED system (to this aim, we used the PROforma language [34], as explained in Section 4), but it is used here to show (in a comprehensible way) what care pathways usually look like.
The care pathway in Figure 5 prescribes, first of all, that the patient is subjected to some general blood analysis, which must be repeated a second time after four hours. Once the analysis results are ready, the doctors assess them to decide whether the patient should be hospitalized. Specifically, if the results are not good, the care pathway in Figure 5 provides instructions to perform further tests on the patient (in this case, a hemodynamics consulting and a coronary catheterization) and, based on the obtained results, to activate a further procedure concerning the hospitalization of the patient. On the other hand, if the analysis results are good, after 8–12 h, the patient is subjected (again) to some general blood analysis, whose results drive the next clinical steps to be performed on the patient, according to the care pathway in Figure 5.
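The branching logic just described can be sketched as a simple function. The task names and the boolean modeling of the analysis outcome are simplifications of the BPMN pathway in Figure 5, introduced only for illustration:

```python
def chest_pain_pathway(results_good):
    """Return an illustrative, ordered list of tasks of the pathway."""
    # First phase: blood analysis, repeated after four hours, then assessment.
    tasks = ["blood_analysis", "blood_analysis_after_4h", "assess_results"]
    if results_good:
        # Good results: repeat the analysis after 8-12 h; its outcome
        # drives the next clinical steps.
        tasks.append("blood_analysis_after_8_12h")
    else:
        # Bad results: further tests, then the hospitalization procedure.
        tasks += ["hemodynamics_consulting", "coronary_catheterization",
                  "hospitalization_procedure"]
    return tasks

print(chest_pain_pathway(False))
```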
The enactment of the various clinical tasks takes place in different moments of the therapy. Furthermore, a collaboration between doctors and medical staff is crucial to enact the proper medical treatments for each patient. The components of the medical staff (i.e., nurses and general practitioners) are equipped with Android-based mobile devices and are notified of the progress of care pathways and of the clinical tasks that have to be enacted for supporting doctors (e.g., to perform a blood analysis to the patient, etc.). Figure 6 shows two screenshots of the GUI provided to the medical staff, which only allows for tactile interaction.
The TESTMED system provides the ability to properly orchestrate the clinical tasks, assigning them to (available) doctors or members of the medical staff, and to keep track of the status of the care pathway, by recording the results of the analyses and doctors’ decisions. Reminders and notifications alert doctors and the medical staff if new data (e.g., the results of some analysis ready to be analyzed—see Figure 4b) are available for some patient. If this is the case, the doctor can decide to visualize further details about the analysis results and the execution status of the care pathway, or simply accept the notification. It is worth noticing that the doctor can abort the enactment of the care pathway at any moment.

4. The Architecture of the TESTMED System

The TESTMED system is based on three main architectural components: a graphical user interface, a back-end engine, and a task handling server. Figure 7 shows an overall view of the system architecture.
The system is implemented employing a multimodal user interface. On the one hand, doctors interact with a GUI (see Figure 4) that is specifically designed for being executed on large mobile devices (e.g., tablets), and allows for tactile or vocal interaction. In particular, vocal interaction enables doctors to work flexibly in clinical scenarios where their visual and haptic attention (i.e., eyes and hands) are mainly busy with the patient’s visit. On the other hand, the GUI provided to members of the medical staff is thought to be visualized on small mobile devices (e.g., smartphones) and provides only a tactile interaction (see Figure 6).
The back-end engine with its services provides the ability to interrupt, activate, execute and monitor CGs and to exchange relevant data between doctors and the medical staff at run-time. In TESTMED, a combination of languages is used to define each CG. First, the PROforma language [34] is used to model each CG as a set of clinical activities, data items, and the control flow between them. Then, starting from the resulting PROforma model, a configuration file is semi-automatically built in XML to define all the settings that enable the multimodal interaction functionality and the integration of the different system components. As a result, a CG is finally represented as a guideline bean, which is deployed into the system and ready for execution.
The execution of CGs relies on a precise routing of data, events and clinical activities, which follows a well-defined process-aware and content-based approach: activities are scheduled and messages are dispatched in an event- and data-driven way. The back-end engine manages and controls the routing of all clinical activities, related data, and produced events between the corresponding parties, including actors, services, and applications. Therefore, it guarantees a successful interaction among all participating units and services. Moreover, any software module that communicates with the engine for completing a defined activity can be viewed as an external service to be invoked when needed.
Basically, services are considered wrappers over pre-existing legacy systems, such as the Electronic Medical Record (EMR) systems employed in hospitals.
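As a toy illustration of this event- and data-driven dispatch of events to services, consider the following sketch. The real system realizes this routing through Apache Camel rules rather than hand-written code, and the event types and handler services below are hypothetical:

```python
# Handlers stand in for the external services invoked by the engine.
def notify_doctor(event):
    return f"notify doctor: {event['data']}"

def schedule_task(event):
    return f"schedule: {event['data']}"

# Content-based routing table: the event type selects the service.
ROUTES = {
    "lab_result_ready": notify_doctor,
    "task_due": schedule_task,
}

def route(event):
    """Dispatch an event to the service registered for its type."""
    return ROUTES[event["type"]](event)

print(route({"type": "lab_result_ready", "data": "troponin results"}))
```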
The routing engine relies on a primary scheduling unit, which handles the accomplishment of activities subject to temporal constraints (e.g., examinations and laboratory tests that must be performed in a timely manner), and communicates with the EMR system in order to (i) search and query medical and administrative patient information; (ii) organize and plan examinations, laboratory tests, medicine receipts, etc., according to the related medical procedure; and (iii) get notified about events and laboratory test results so that they can be forwarded to the assigned doctors. This interoperability with the EMR system is realized by employing the Health Level 7 (HL7) standard protocol (HL7 is a set of international standards for the transfer of clinical and administrative data between hospital information systems; http://www.hl7.org/). The analysis, processing and creation of HL7 message packets is organized and controlled by a specific HL7 processing unit.
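The following minimal sketch illustrates the kind of parsing such an HL7 processing unit performs on an incoming lab-result message. Real deployments rely on a full HL7 engine (in TESTMED, Mirth) rather than hand-rolled parsing, and the sample message and field values below are fabricated for illustration:

```python
# A fabricated HL7 v2 ORU^R01 (lab result) message: segments are
# separated by carriage returns and fields by pipes.
SAMPLE_ORU = "\r".join([
    "MSH|^~\\&|LAB|HOSP|TESTMED|DEA|20190321||ORU^R01|MSG001|P|2.5",
    "PID|1||PAT123||Rossi^Mario",
    "OBX|1|NM|TROPONIN||0.7|ng/mL|0.0-0.4|H",
])

def parse_hl7(message):
    """Split an HL7 v2 message into {segment_id: [list of field lists]}."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

msg = parse_hl7(SAMPLE_ORU)
msg_type = msg["MSH"][0][8]     # MSH-9 (message type) lands at index 8,
                                # since the field separator is MSH-1
observation = msg["OBX"][0][3]  # OBX-3: observation identifier
value, unit = msg["OBX"][0][5], msg["OBX"][0][6]
print(msg_type, observation, value, unit)
```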
It is worth noticing that all the conducted activities while performing a CG should be stored and registered, in order to keep track and retrieve accordingly all the events, activities and data, which relate to the medical case and its decision-making scenarios. This registered information might be potentially utilized for: (i) creating medical and analytical reports; (ii) documenting the suggested and followed care plan assigned to each patient so that it can be used as a legal reference in the future; (iii) providing a database platform that maintains all the medical records and chosen treatment scenarios for all patients, which can be exploited to introduce a better and improved version of the documented CG after running further analysis on all these collected data; and (iv) providing valuable support for forensic analysis [35].
From a technical perspective, the multimodal interaction feature is developed using different technologies, including Text-To-Speech (TTS) engines, the Microsoft Automatic Speech Recognition (ASR) engine and the Multi-touch for Java framework (MT4j, http://www.mt4j.org/). As for the back-end, it is implemented using the Tallis engine (http://archive.cossac.org/tallis/Tallis_Engine.htm), which handles the compatibility with the legacy systems installed in the hospital Policlinico Umberto I. This compatibility is realized by exchanging HL7 messages over Mirth (http://www.mirthcorp.com/products/mirth-connect). Each of the above-mentioned software parts is J2EE-based and hosted on a TomEE (http://tomee.apache.org/apache-tomee.html) application server. In particular, the communication between the back-end and the GUI of the assigned doctor is performed by a JMS-based notification engine, called RabbitMQ (http://www.rabbitmq.com/). Finally, Apache Camel (https://camel.apache.org/) is used as a rule and mediation engine to orchestrate all the above technologies and components.
Finally, a task handling server is in charge of communicating with both the back-end and the existing legacy systems via HL7 and RESTful messages. This server has the important role of informing the medical staff when a clinical activity needs to be performed in the context of a specific CG. On the other hand, medical staff members, as previously discussed, are provided with a dedicated front-end Android application that employs RESTful services to interact with the task handling server.
All the logic of a CG is therefore coded via the PROforma model and various XML-based configuration files. Those files describe the data to be provided by the user interfaces, the queues and messages (JMS and HL7) to be exchanged, the routing and the scheduling of the different interactions, etc. (In particular, all those files instruct the routing and mediation rules enacted by the Apache Camel framework.) All these files are bundled in a guideline bean—a zipped archive that is then disassembled by the system and used to instruct the different components. For each new guideline to be deployed and enacted by the system, a new PROforma model and related XML-based files should be produced by the system engineer, on the basis of the BPMN process describing the care pathway (as the one of Figure 5 for the chest pain). Therefore, analogous to many middleware technologies and process-aware tools, the TESTMED system is general-purpose (no new code should be written when deploying a new CG) but requires technical configuration by system engineers, who, on the basis of the requirements of the specific care pathway to be implemented, design and deploy the guideline bean.
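The bundling and disassembly of a guideline bean as a zipped archive can be sketched as follows. The file names and contents are hypothetical; the actual bean layout is defined by the TESTMED system:

```python
import io
import zipfile

def bundle_guideline_bean(files):
    """Zip a dict {filename: content} into an in-memory guideline bean."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name, content in files.items():
            zf.writestr(name, content)
    return buf.getvalue()

def disassemble_guideline_bean(bean_bytes):
    """Unpack the bean back into {filename: content} for the components."""
    with zipfile.ZipFile(io.BytesIO(bean_bytes)) as zf:
        return {name: zf.read(name).decode() for name in zf.namelist()}

bean = bundle_guideline_bean({
    "chest_pain.pf": "<PROforma model of the chest pain CG>",
    "routing.xml": "<camel-routes/>",  # routing/mediation rules
    "ui-config.xml": "<ui-forms/>",    # data shown on the user interfaces
})
files = disassemble_guideline_bean(bean)
print(sorted(files))
```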

5. User Evaluation

The TESTMED system has been thought to be used in hospital wards for supporting doctors in the execution of CGs. In this context, medical staff and doctors must work in collaboration and coordination to perform the appropriate clinical activities on the patients. Hence, providing a satisfactory mobile interaction is crucial, as it allows for:
  • supporting the mobility of doctors for visiting the patients;
  • facilitating the information flow continuity by supporting instant and mobile access;
  • speeding up doctors’ work while executing CGs and performing clinical decision-making.
The latter point is also confirmed by a survey carried out by the PricewaterhouseCoopers’ Health Research Institute (HRI) [36], which reported that 56% of doctors—over a large sample—were able to improve and speed up their decision-making thanks to the use of mobile technologies.
Although the utilization of mobile devices and applications may significantly empower the ability of doctors and medical staff to collaborate and coordinate themselves, there are still some key challenges to be addressed. Among them, one of the most relevant consists of realizing a GUI that is able to represent in a compact yet understandable way the description underlying a clinical activity and, at the same time, does not distract the doctors from visiting the patients [37].
To achieve this objective, we realized the TESTMED system leveraging the user-centered design (UCD) methodology [20], which places the end users at the center of any design and development activity. To this end, we initially developed two mockups of the system (during months 4 and 9 of the project, respectively). Various usability studies (including thinking-aloud techniques, focus groups, etc.) were conducted on each mockup with real doctors, and the results of each user study were used to incrementally improve the design of the GUI. One of the main effects of applying the UCD methodology was the introduction of the vocal interface in the second mockup, alongside the basic touch interface that was the only modality in the first mockup. This was because the users’ feedback on the first mockup indicated that doctors need to keep their hands free while visiting a patient; for this reason, we introduced the possibility of (also) interacting vocally with the GUI.
On the basis of the outcomes of the above usability studies, we iteratively produced two working prototypes of the system, in months 12 and 18 of the project, respectively, and assessed them using well-established evaluation methods involving the target users (i.e., real doctors). The results and findings of the user evaluation performed on the working prototypes are discussed in the next sections.

5.1. Evaluation Setting and Results of the First User Study

The two working prototypes were tested with patients suffering from chest pain (cf. Section 2.3); therefore, the related CG was modeled, configured and deployed on the system to perform the testing.
The initial user study was performed at Policlinico Umberto I hospital in Rome with the help of the Department of Emergency and Admissions (DEA). Five postgraduate medical students and two doctors participated in the study. Given a patient simulator reproducing a real patient with chest pain symptoms, the participants were asked to use the TESTMED system to visit the patient according to the related CG; Figure 8 shows a doctor using the TESTMED system to enact the CG on the patient simulator.
After the completion of the user testing, the participants were given a questionnaire aimed at gathering their background information and collecting data about how they perceived the interaction with the system. Specifically, the questionnaire consisted of 11 statements covering aspects such as ease of use of the GUI, quality of the multimodal interaction, etc. The answers were given on a 5-point Likert scale, ranging from 1—strongly disagree to 5—strongly agree, reflecting how much participants agreed with each statement:
Q1
I have a good experience in the use of mobile devices.
Q2
The interaction with the system does not require any special learning ability.
Q3
I judge the interaction with the touch interface to be very satisfying.
Q4
I judge the interaction with the vocal interface to be very satisfying.
Q5
I think that the ability to interact with the system through either the touch interface or the vocal interface is very useful.
Q6
The system can be used by users who are not expert in the use of mobile devices.
Q7
The system allows for constantly monitoring the status of clinical activities.
Q8
The system correctly drives the clinicians in the performance of clinical activities.
Q9
The doctor may—at any time—access data and information relevant to a specific clinical activity.
Q10
The system is robust with respect to errors.
Q11
I think that the use of the system could facilitate the work of a doctor in the execution of his/her activities.
Table 1 summarizes the results of the first user study. From these results, we can infer that the general attitude of the participants towards the system was positive. The results highlight that participants considered the system effective in the enactment of CGs, since it was able to concretely orchestrate doctors in executing the clinical activities included in the CG (cf. results of Q8). Furthermore, the system allowed doctors to constantly monitor the status of each clinical activity (cf. results of Q7) and to easily access information relevant to the specific activity under execution (cf. results of Q9). Participants also showed a fair amount of satisfaction with how the system behaves with respect to error handling (cf. results of Q10), learnability (cf. results of Q2), and ease of use for non-expert users (cf. results of Q6).
On the negative side, the interaction with the vocal interface was considered quite unsatisfactory (cf. results of Q4), while high satisfaction was reported for the touch interface (cf. results of Q3). Nonetheless, the participants agreed that a multimodal interaction involving both touch and vocal interfaces could be useful to facilitate the work of doctors (cf. results of Q5). It is worth noticing that the questionnaire also allowed participants to add feedback and comments in free text. Through this feature, five out of seven participants explicitly asked us to develop an improved vocal interaction for the system, enabling doctors to dynamically switch the interaction modality (from vocal to touch, or vice versa) when needed.
The responsiveness of the GUI is another aspect that was investigated with the first working prototype. In this direction, further tests were performed to measure the time required by doctors to perform a single step of the survey associated with the CG deployed into the system (see also Section 2.3). Specifically, we assumed that a doctor completed a single step of the survey when s/he passed from one scene of the GUI to the next by answering the corresponding question of the survey (as discussed in Section 4, TESTMED exploits MT4j to build the GUI frames of the system, referred to as scenes, which handle (multi)touch input events). We monitored the time required by doctors to complete each scene associated with the CG’s survey, until its completion.
We ran this test twice: first using the touch interface and then using the vocal interface. A summary of the collected results is shown in Figure 9, where each scene transition is represented on the x-axis, and the corresponding time required for generating the new scene and displaying it on the screen is on the y-axis. In our case study, which focused on the chest pain CG, three scene transitions were needed before generating the final chest pain score.
The tests were performed on an ACER Iconia Tab W500 (ACER Inc., Xizhi, Taiwan) with a 1 GHz AMD CPU and 2 GB of RAM, running Windows 7 (Microsoft, Redmond, WA, USA). With the exclusive use of the touch interface, the analysis shows an average time of 400 ms to complete a scene transition, compared to 600–700 ms when only the vocal interface is used. The delay introduced by the vocal interface is due to the extra time needed by the system to contact the ASR engine (usually around 200–250 ms). Nonetheless, from the user’s point of view, this delay has a low impact on the overall responsiveness of the system, since a scene transition time that does not exceed 700 ms is usually considered acceptable by users [38].
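The latency figures compose additively, as the following trivial sketch shows (the midpoint of the reported 200–250 ms ASR round-trip is an assumption of ours):

```python
# Approximate scene-transition latencies from the tests (milliseconds).
TOUCH_TRANSITION_MS = 400            # average with the touch interface alone
ASR_ROUNDTRIP_MS = (200 + 250) / 2   # assumed midpoint of the reported range

# With the vocal interface, each transition also pays the ASR round-trip.
vocal_transition_ms = TOUCH_TRANSITION_MS + ASR_ROUNDTRIP_MS

assert 600 <= vocal_transition_ms <= 700  # consistent with the measured 600-700 ms
```

The sum stays under the 700 ms threshold that users typically accept [38], which explains why the extra ASR delay was not perceived as a problem.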

5.2. Evaluation Setting and Results of the Second User Study

The results and findings of the first user study were leveraged to refine the weak aspects of the first prototype, in order to realize a (more) robust second working prototype of the system. Compared with the first prototype, the second one provided a more elaborate design of the GUI, together with a redefinition of the interaction principles underlying the vocal interface. For example, in the first prototype, the vocal features of the system were always active during the enactment of a CG. This resulted in many “false positives”, i.e., words pronounced by the doctor during the patient’s visit were wrongly recognized as vocal commands, consequently activating unwanted functionalities. In the second prototype, to prevent such false triggers, we decided to activate the vocal interface only after a specific (and customizable) key vocal instruction pronounced by the doctor.
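This gating behaves like a wake-word filter on the recognized transcript stream. A minimal sketch, in which the class name and the default key phrase are hypothetical (the actual instruction is configurable):

```python
class VocalGate:
    """Ignore recognized speech until a key vocal instruction opens the gate."""

    def __init__(self, wake_word: str = "testmed"):
        self.wake_word = wake_word.lower()
        self.active = False

    def on_transcript(self, text: str):
        """Return the recognized command, or None while the gate is closed."""
        words = text.lower().split()
        if not self.active:
            if self.wake_word in words:
                self.active = True   # opened only by the key instruction
            return None              # everything before it is ignored
        self.active = False          # one command per activation
        return text

gate = VocalGate()
# Conversation with the patient is not mistaken for a command:
assert gate.on_transcript("the patient reports mild chest pain") is None
gate.on_transcript("testmed")        # the key instruction opens the gate
assert gate.on_transcript("next question") == "next question"
```

Closing the gate again after each command keeps the window for false positives as small as possible during the rest of the visit.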
Leveraging the second working prototype, we performed a second user study employing the same chest pain CG used in the first one. The second user study also took place at the DEA of Policlinico Umberto I in Rome. Seven users (different from those involved in the first user study) participated, including six postgraduate medical students and one doctor. As in the first user study, participants worked with the patient simulator and were asked to use the second prototype of the system to enact the CG (see Figure 8). They also completed the same questionnaire employed in the first user study to assess the effectiveness of the system. The results of this second user study are shown in Table 2.
The analysis of the results makes it evident that the positive impressions obtained in the first user study were confirmed by this second user study. Referring to Figure 10, we can note that participants’ ratings in the second user study increased for all statements compared with the first user study. Moreover, the results highlight that the design of the second prototype made considerable progress, in particular because its development closely followed the traditional design guidelines for building multimodal GUIs [10].
One critical aspect was the interaction with the vocal interface (cf. statement Q4), which was rated as quite unsatisfactory in the first user study, with an average rating of 2.7. Conversely, the improved vocal interface employed in the second working prototype was well received by the participants, with an average rating of 4. To verify that this improvement was not a coincidence, we compared the ratings for statement Q4 collected in the first and second user studies using a 2-sample t-test. This statistical test assesses whether the difference between two population means is statistically significant or due to random chance. The results of the 2-sample t-test are summarized in Figure 11. Statistical significance is determined by looking at the p-value, which gives the probability of observing the collected results by chance alone. If the p-value is 0.05 or less, it is possible to conclude that the collected data are not due to a chance occurrence, which is the case for our data (p-value of 0.0265099). This allows us to conclude that the improvement of the vocal interface was statistically significant.
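The test itself is a standard pooled-variance t statistic. In the sketch below, the per-participant ratings are hypothetical—the paper reports only the averages (about 2.7 and 4.0, with n = 7 per study)—so the resulting t value illustrates the procedure rather than reproducing the published p-value:

```python
from statistics import mean, variance

# Hypothetical per-participant Q4 ratings (n = 7 each), chosen to match the
# reported averages of ~2.7 (first study) and 4.0 (second study).
first_study  = [2, 3, 3, 2, 3, 3, 3]
second_study = [4, 4, 4, 4, 4, 4, 4]

def two_sample_t(a, b):
    """Pooled-variance 2-sample t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(b) - mean(a)) / (pooled * (1 / na + 1 / nb)) ** 0.5

t = two_sample_t(first_study, second_study)
# The two-tailed critical value t(0.975, df = 12) is about 2.179; any |t|
# above it corresponds to p < 0.05.
assert abs(t) > 2.179  # the rating improvement is statistically significant
```

With real per-participant data, the same statistic fed to a t-distribution CDF yields the exact p-value reported in Figure 11.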
Finally, we also administered a traditional System Usability Scale (SUS) questionnaire to precisely measure the usability of the second working prototype. SUS is one of the most widely used methodologies for post-test data collection (43% of post-tests are of the SUS type [39]). It consists of a 10-item questionnaire, where each item is evaluated on a 5-point Likert scale ranging from 1—strongly disagree to 5—strongly agree. Once completed, an overall score is assigned to the questionnaire. This score can be compared with several benchmarks presented in the research literature to determine the level of usability of the GUI being evaluated. In our test, we used the benchmark presented in [39] and shown in Figure 12. From the analysis of the SUS questionnaires completed by the seven participants of the second user study, the final average ratings were 77.5 and 78.4 for the GUIs used by the doctors and the medical staff, respectively, which corresponds to a rank of ’B+’ in the benchmark of [39]. This means that the GUI of TESTMED has, on average, a good usability rate, even if there is still room for improvement.
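SUS scoring itself is standard: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum of contributions is scaled by 2.5 to a 0–100 range. A sketch with a hypothetical response sheet (not one of our participants’ actual answers):

```python
def sus_score(responses):
    """Compute the standard 0-100 SUS score from ten 1-5 Likert responses."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based: even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical response sheet for items 1..10.
example = [4, 2, 4, 1, 5, 2, 4, 2, 4, 2]
assert sus_score(example) == 80.0  # in the same range as our 77.5/78.4 averages
```

Averaging such per-participant scores gives the 77.5 and 78.4 figures that are then placed on the benchmark scale of [39].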

6. Related Work

6.1. Process-Oriented Healthcare Systems

Notwithstanding the success of PAISs in many recent industrial projects and the adoption of fast-growing process-oriented methodologies in various real-world scenarios [40,41,42], BPM technologies still face several challenges to being widely adopted in healthcare applications [14,28]. This is mainly due to the rigid structure imposed by PAISs on the process definition, which restricts the flexibility of handling and managing healthcare processes, often affected by many variations and exceptions during their enactment [43]. To tackle this issue, BPM technologies should evolve towards a more flexible management of healthcare processes, while—on the contrary—recent research attempts are more focused on improving automation on top of existing PAISs [11].
In this direction, the work [44] identifies the different flexibility requirements that can be fulfilled by employing process management technologies in healthcare applications. The focus of this work is the diagnostic steps of a gynecological oncology process and its possible implementation in four different PAISs. However, [44] is mainly oriented to analyzing flexibility requirements for well-structured healthcare processes.
The work [45] investigates the modeling of pathology processes for programmed surgical interventions using the BPMN notation. The designed models include all the clinical activities required not only during a surgical intervention, but also in the preparation/follow-up phases, which take place before/after the intervention. Note that, in [45], no supporting system for process enactment is discussed. Similarly, the work [46] defines several clinical processes in BPMN covering all the therapy steps of a patient, from admission to the hospital until discharge.
A wider investigation of the promising role of BPMN for modeling healthcare processes is presented in [47], where challenges such as the multi-disciplinary nature of healthcare processes and their interoperability requirements for execution in traditional PAISs are discussed. In [14,48], the authors again consider interoperability requirements, together with service coordination and application integration, as basic assets to achieve the required support for enacting healthcare processes. In particular, they notice that current information systems employed in healthcare consist of several independent (clinical) department systems, leading to many integration issues. Although the use of the HL7 standard mitigates such issues, the authors of [14,48] advocate that a commonly agreed solution guaranteeing proper integration between different clinical systems is still missing.
In [23], the authors investigate how process mining techniques can enhance the functionality of EHR applications by exploiting the knowledge contained in their event logs, while addressing various issues regarding the costs and efficiency of healthcare process management. According to [49], process mining can be classified into three major branches: (i) process discovery [50,51,52], (ii) conformance checking [53,54,55,56,57] and (iii) process enhancement [58,59,60]. Specifically, in [23], the authors argue that the enhancement of healthcare processes is directly linked to understanding the “nature” of these processes. This can be done by identifying what happens in the whole healthcare procedure (i.e., process discovery) and analyzing whether it presents inconsistencies or deviations with respect to the expected therapies it aims to deliver (i.e., conformance checking). Finally, inefficiencies, bottlenecks, and other issues can be detected and fixed employing process enhancement techniques.
Given a service-oriented architecture that leverages web services, some systems have been presented to address the issue of enacting healthcare processes through the definition of service orchestration specifications as BPEL processes. For example, in [61], the authors introduce an innovative way to create web service specifications in BPEL through a semi-automated model-driven approach, focusing on the administrative workflow, including medical test scheduling and tracking the patient’s status from admission to discharge. Similarly, service-oriented integration, web services, and process technology are also discussed in [62] as a means to automate healthcare processes in inter-organizational emergency scenarios. In [63], the discussion of [62] is extended to mobile technologies and cloud-based architectures.
In [64], the Serviceflow Management system is presented as a tool to manage entire healthcare processes, in particular when several organizational units are involved in the care delivery activity. The system provides a three-level architecture, where the upper level is responsible for coordinating the services available in the lower levels and handling the whole healthcare process, which is modeled as a Serviceflow. Finally, in [65], the authors present an approach to support healthcare processes via a service-oriented architecture. The approach focuses on organizing the operations performed in a sterile processing department as a healthcare process and identifies the architectural requirements needed to realize a system supporting such a process. The proposed architecture was prototyped and evaluated in a sanitation working area.
Compared with the above works, which mainly provide ad hoc solutions for managing well-defined healthcare processes, the aim of TESTMED is to realize a general-purpose clinical PAIS able to interpret CGs encoded in the PROforma language and orchestrate their execution among doctors and medical staff through mobile technologies and multimodal user interfaces.

6.2. Mobile and Multimodal Interaction in the Healthcare Domain

Mobile and multimodal user interaction have a long history of success in many real-world settings, including emergency management [66,67,68], smart and collaborative environments [69,70,71,72], cultural heritage [73,74], and—of course—healthcare [37]. When it is employed properly, such technology can contribute not only to improve patient care delivery, but also to push towards a large adoption of mobile clinical devices in hospitals.
In this direction, in [75], Flood et al. propose a method that allows designers and developers of medical mobile applications to evaluate the implemented prototype of their applications early, in particular when their usage might impose a high cognitive effort on the end users. The proposed method includes interrupting the application development process and modifying the user interface design when required. Although this methodology aims to mitigate the cognitive effort perceived while using mobile applications, it does not introduce any novel design technique tackling the issue of mobile multimodal interaction. Conversely, in [76], Jourde et al. acknowledge the importance of providing multimodal interactions with clinical mobile devices by proposing a specification for designing user interfaces for a multimodal collaborative healthcare system. Another interesting approach is the one proposed in [77], which consists of a smart mobile device with a software system supporting mobile interaction that handles auditing tasks and empowers the medical staff of emergency rooms to establish seamless clinical handover procedures. Finally, the GuideView system [78,79] highlights the importance of adequate mobile interactions for astronauts in space exploration missions, especially when they need to deliver medical care to themselves in situations where professional medical assistance from Earth is impossible. It is worth noticing that none of the above works is intended to improve CG modeling or execution.
To summarize and confirm what was stated in Section 1, the TESTMED system aims to satisfy the following requirements: ensuring information flow continuity by supporting mobile access to information, empowering doctors in the execution of CGs, and providing an effective design of mobile multimodal interaction that alleviates the physical and cognitive overload on doctors and medical staff. Recent works, including the above-mentioned ones, have mainly targeted just a single requirement or a partial combination of them.

6.3. Vocal Interfaces

Computer input and output interfaces have evolved in recent years to provide humans with an improved user experience when interacting with and collaborating through computers in different kinds of tasks. One of the drivers of this evolution is providing interfaces that exploit all human senses, with particular emphasis on visual, verbal, and tactile interactions, which are considered to provide the most realistic dialog between human and computer. Thanks to the recent maturity and wide availability of smart mobile devices, which have become smaller, more portable, and more powerful, the call for more effective interfaces beyond traditional ones has become even more urgent. As a consequence, a considerable number of recent research works discuss current limitations and possible opportunities of so-called multimodal interaction. The involved technologies include haptic devices, such as digital pens, fingerprint scanners and 3D gestures, and advanced vocal and visual interfaces, such as voice commands and face recognition. In particular, these technologies must be integrated with the aim of providing target users with the interaction that best suits their needs.
Vocal interfaces represent a widely employed interaction model, having demonstrated their usefulness in various scenarios such as home automation, car driving and manufacturing processes. In an early patent [80], the authors developed an interactive speech application in the context of telephone systems, where interactive dialog tasks with users (i.e., callers) are defined in the form of dialog modules whose user input is provided as verbal commands. The very same approach can be employed in the context of smart mobile devices, where display screens are relatively small and vocal input supports easier, faster and more efficient task execution compared with the sole use of the touch interface. An example of this approach is presented in [81], where the authors suggested using a voice-controlled interface to overcome the burden of reading large amounts of text (e.g., selection menus) and typing responses on a small touch keyboard. The work presented in [82] considers the case of crisis management and suggests that providing multimodal interfaces, including vocal interaction, would be highly valuable in such environments, leading to solving issues in a shorter time while ensuring better collaboration among team members. On the other side, the authors of [83] analyzed the performance of available automatic speech recognition (ASR) techniques and indicated the level required for speech to become a truly pervasive user interface. The employment of new, very precise recognition techniques based on neural networks has finally made voice-based interfaces ubiquitous.
Recently, many commercial vocal interfaces have been introduced in the market. This can easily be noticed in many mobile operating systems, where a vocal interface (as a natural language user interface) is provided to serve as an intelligent personal assistant and knowledge navigator. Such systems include Siri, Google Assistant and Cortana. These programs listen to the user’s vocal input, interpret it as verbal commands, and respond using text-to-speech, thus imitating human-to-human vocal interaction. Unfortunately, when TESTMED was developed, these systems were not yet available, so we employed a Text-To-Speech (TTS) engine and the Microsoft Automatic Speech Recognition (ASR) engine, as discussed in Section 4.
In the context of health support systems, vocal interfaces have been employed in several applications. The authors of [84] suggested using audio input after detecting the fall of elderly persons living alone. Here, the system is implemented on a mobile device and an accelerometer is used to identify a suspected fall. At that point, the user can ask for immediate help vocally, as typing on the keypad could be impractical in such a critical situation. Similarly, in the direction of tele-home health care, the authors of [85] examined the use of voice as a means of detecting patients’ emotions during remote monitoring, a context in which an even higher level of skill, professionalism, and competence is needed with respect to an in-person visit. The authors pointed out that detecting patients’ emotions through voice could be even more accurate, as patients themselves find it difficult to express their own feelings precisely, and therefore emphasized the importance of developing such a multimodal intelligent interface. A similar work was conducted in the field of psychopathology [86], where the clinical diagnosis of major depression was supported by facial expression and voice analysis, helping the automatic detection of depression, whose assessment is usually based on classical reports (e.g., clinical interviews and questionnaires).
The authors of [87] discussed the employment of robots in healthcare structures (e.g., hospitals) and proposed applying vocal interaction with robots moving around the healthcare center, for a better and more efficient execution of tasks in such environments.
In their work presenting the multimodal integration of continuously spoken language and continuous gesture, the authors of [88] included in their prototype an example of medical informatics in which users (patients, in this case) can search, using speech and gesture, for available healthcare providers on a map of their zone. The multimodal input in this prototype is translated into a query executed on the database of all healthcare providers (e.g., doctors) in the selected area, and the results can then be shown directly on the map. In the attempt to establish a health-aware smart home serving as an intelligent space that assists elderly people and users with special needs in their daily life activities, the authors of [89] argued that voice interfaces can be helpful, as they neither need to be worn nor require the user to be spatially close to any device.
In the context of navigational assistance for surgical interventions, the authors of [90] developed a prototype that uses virtual reality together with a multimodal interface (including vocal input) and an expert system that helps to infer context, in order to support surgeons either in a process simulation or in a realistic intervention, for both training and clinical purposes.
Our work can be positioned in this research context, where a mobile application that supports the execution of clinical guidelines while providing multimodal interaction with a vocal interface can lead to significant benefits for both patients and clinicians. These benefits can easily be noticed when clinicians use the vocal input in hands-free mode, which allows healthcare tasks to be carried out more easily and quickly and helps the medical staff while they physically examine their patients, thus improving the overall clinical performance.

7. Conclusions

In this paper, we have presented the main outcomes of the TESTMED project, which aimed at realizing a clinical PAIS supporting doctors during the enactment of CGs in hospital wards, through the interplay of advanced GUIs deployed on mobile devices that provide touch and vocal interaction with the system. The TESTMED system was designed through the UCD methodology [20] and evaluated in the emergency room of the DEA of Policlinico Umberto I, the main hospital in Rome (Italy). This allowed us to demonstrate that the adoption of mobile devices providing multimodal GUIs, coupled with a process-oriented execution of clinical tasks, represents a valuable solution to support doctors in the execution of CGs.
As future work, we will extend the system to support further CGs, such as syncope and dyslipidaemias, to make it usable in more clinical circumstances. Furthermore, we plan to define a precise methodology explaining how to model CGs as PROforma processes. Finally, we plan to test the system over longer periods of time, enabling just a single interaction modality for doctors (exclusively touch or vocal). This will allow us to understand whether the voice interaction feature makes the system more usable and effective than the traditional touch features of the GUI.

Author Contributions

Conceptualization, F.L., A.M., M.M. and T.C.; methodology, A.M. and M.M.; software, M.S. and F.L.; validation, M.S., A.M. and T.C.; formal analysis, M.S., F.L. and A.M.; investigation, M.S. and F.L.; resources, M.M. and T.C.; data curation, F.L. and A.M.; writing—original draft, M.S., F.L., A.M., M.M. and T.C.; writing—review & editing, M.S., F.L. and A.M.; visualization, M.S., F.L. and A.M.; supervision, A.M., M.M. and T.C.; project administration, M.M. and T.C.; funding acquisition, M.M. and T.C.

Funding

This research received no external funding.

Acknowledgments

This multi-year research work has been supported by the Sapienza grants TESTMED and SUPER and performed also in the context of the Centro Interdipartimentale “Information-Based Technology Innovation Center for Health” (STITCH).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Van De Belt, T.H.; Engelen, L.J.; Berben, S.A.; Schoonhoven, L. Definition of Health 2.0 and Medicine 2.0: A systematic review. J. Med. Internet Res. 2010, 12. [Google Scholar] [CrossRef] [PubMed]
  2. Chaudhry, B.; Wang, J.; Wu, S.; Maglione, M.; Mojica, W.; Roth, E.; Morton, S.C.; Shekelle, P.G. Systematic review: Impact of health information technology on quality, efficiency, and costs of medical care. Ann. Intern. Med. 2006, 144, 742–752. [Google Scholar] [CrossRef] [PubMed]
  3. Cook, R.; Foster, J. The impact of Health Information Technology (I-HIT) Scale: The Australian results. In Proceedings of the 10th International Congress on Nursing Informatics, Helsinki, Finland, 28 June–1 July 2009; pp. 400–404. [Google Scholar]
  4. Buntin, M.B.; Burke, M.F.; Hoaglin, M.C.; Blumenthal, D. The benefits of health information technology: A review of the recent literature shows predominantly positive results. Health Aff. 2011, 30, 464–471. [Google Scholar] [CrossRef] [PubMed]
  5. Lee, J.; McCullough, J.S.; Town, R.J. The impact of health information technology on hospital productivity. RAND J. Econ. 2013, 44, 545–568. [Google Scholar] [CrossRef]
  6. Shachak, A.; Hadas-Dayagi, M.; Ziv, A.; Reis, S. Primary care physicians’ use of an electronic medical record system: A cognitive task analysis. J. Gen. Intern. Med. 2009, 24, 341–348. [Google Scholar] [CrossRef] [PubMed]
  7. Richtel, M. As doctors use more devices, potential for distraction grows. The New York Times, 14 December 2011. [Google Scholar]
  8. Booth, N.; Robinson, P.; Kohannejad, J. Identification of high-quality consultation practice in primary care: the effects of computer use on doctor–patient rapport. Inform. Prim. Care 2004, 12, 75–83. [Google Scholar] [CrossRef] [PubMed]
  9. Margalit, R.S.; Roter, D.; Dunevant, M.A.; Larson, S.; Reis, S. Electronic medical record use and physician–patient communication: An observational study of Israeli primary care encounters. Patient Educ. Couns. 2006, 61, 134–141. [Google Scholar] [CrossRef] [PubMed]
  10. Laxmisan, A.; Hakimzada, F.; Sayan, O.R.; Green, R.A.; Zhang, J.; Patel, V.L. The multitasking clinician: Decision-making and cognitive demand during and after team handoffs in emergency care. Int. J. Med. Inform. 2007, 76, 801–811. [Google Scholar] [CrossRef] [PubMed]
  11. Reichert, M. What BPM Technology Can Do for Healthcare Process Support. In Artificial Intelligence in Medicine; Peleg, M., Lavrač, N., Combi, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 2–13. [Google Scholar]
  12. Oviatt, S.; Coulston, R.; Lunsford, R. When do we interact multimodally?: Cognitive load and multimodal communication patterns. In Proceedings of the 6th International Conference on Multimodal Interfaces, State College, PA, USA, 13–15 October 2004; pp. 129–136. [Google Scholar]
  13. Pieh, C.; Neumeier, S.; Loew, T.; Altmeppen, J.; Angerer, M.; Busch, V.; Lahmann, C. Effectiveness of a multimodal treatment program for somatoform pain disorder. Pain Pract. 2014, 14. [Google Scholar] [CrossRef] [PubMed]
  14. Lenz, R.; Reichert, M. IT support for healthcare processes–premises, challenges, perspectives. Data Knowl. Eng. 2007, 61, 39–58. [Google Scholar] [CrossRef]
  15. Mans, R.S.; van der Aalst, W.M.P.; Russell, N.C.; Bakker, P.J.M.; Moleman, A.J. Process-Aware Information System Development for the Healthcare Domain—Consistency, Reliability, and Effectiveness. In Business Process Management Workshops: BPM 2009 International Workshops, Ulm, Germany, September 7, 2009. Revised Papers; Rinderle-Ma, S., Sadiq, S., Leymann, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 635–646. [Google Scholar]
  16. Sonnenberg, F.A.; Hagerty, C.G. Computer-Interpretable Clinical Practice Guidelines. Where are we and where are we going? Yearb. Med. Inform. 2006, 45, 145–158. [Google Scholar]
  17. Peleg, M.; Tu, S.; Bury, J.; Ciccarese, P.; Fox, J.; Greenes, R.; Hall, R.; Johnson, P.D.; Jones, N.; Kumar, A.; et al. Comparing Computer-Interpretable Guideline Models: A Case-Study Approach. J. AMIA 2003, 10, 52–68. [Google Scholar] [CrossRef] [PubMed]
  18. De Clercq, P.A.; Blom, J.A.; Korsten, H.H.; Hasman, A. Approaches for creating computer-interpretable guidelines that facilitate decision support. Artif. Intell. Med. 2004, 31. [Google Scholar] [CrossRef] [PubMed]
  19. Wang, D.; Peleg, M.; Tu, S.; Boxwala, A.; Greenes, R.; Patel, V.; Shortliffe, E. Representation Primitives, Process Models and Patient Data in Computer-Interpretable Clinical Practice Guidelines: A Literature Review of Guideline Representation Models. Int. J. Med. Inform. 2002, 68, 59–70. [Google Scholar] [CrossRef]
  20. Dix, A.; Finlay, J.; Abowd, G.; Beale, R. Human-Computer Interaction; Prentice-Hall, Inc.: Upper Saddle River, NJ, USA, 1997. [Google Scholar]
  21. Cossu, F.; Marrella, A.; Mecella, M.; Russo, A.; Bertazzoni, G.; Suppa, M.; Grasso, F. Improving Operational Support in Hospital Wards through Vocal Interfaces and Process-Awareness. In Proceedings of the 25th IEEE International Symposium on Computer-Based Medical Systems (CBMS 2012), Rome, Italy, 20–22 June 2012. [Google Scholar]
  22. Cossu, F.; Marrella, A.; Mecella, M.; Russo, A.; Kimani, S.; Bertazzoni, G.; Colabianchi, A.; Corona, A.; Luise, A.D.; Grasso, F.; et al. Supporting Doctors through Mobile Multimodal Interaction and Process-Aware Execution of Clinical Guidelines. In Proceedings of the 2014 IEEE 7th International Conference on Service-Oriented Computing and Applications, Matsue, Japan, 17–19 November 2014; pp. 183–190. [Google Scholar]
  23. Mans, R.S.; van der Aalst, W.M.; Vanwersch, R.J. Process Mining in Healthcare: Evaluating and Exploiting Operational Healthcare Processes; Springer: Berlin, Germany, 2015. [Google Scholar]
  24. Lillrank, P.; Liukko, M. Standard, routine and non-routine processes in health care. Int. J. Health Care Qual. Assur. 2004, 17, 39–46. [Google Scholar] [CrossRef]
  25. Gupta, D.; Denton, B. Appointment scheduling in health care: Challenges and opportunities. IIE Trans. 2008, 40, 800–819. [Google Scholar] [CrossRef]
  26. Vissers, J.; Beech, R. Health Operations Management: Patient Flow Logistics in Health Care; Psychology Press: Hove, UK, 2005. [Google Scholar]
  27. Swenson, K.D. (Ed.) Mastering the Unpredictable: How Adaptive Case Management Will Revolutionize the Way that Knowledge Workers Get Things Done; Meghan-Kiffer Press: Tampa, FL, USA, 2010; Chapter 8. [Google Scholar]
  28. Russo, A.; Mecella, M. On the evolution of process-oriented approaches for healthcare workflows. Int. J. Bus. Process Integr. Manag. 2013, 6, 224–246. [Google Scholar] [CrossRef]
  29. Di Ciccio, C.; Marrella, A.; Russo, A. Knowledge-Intensive Processes: Characteristics, Requirements and Analysis of Contemporary Approaches. J. Data Semant. 2015, 4, 29–57. [Google Scholar] [CrossRef]
  30. Peleg, M.; Tu, S.W. Design patterns for clinical guidelines. Artif. Intell. Med. 2009, 47, 1–24. [Google Scholar] [CrossRef]
  31. Field, M.J.; Lohr, K.N. Clinical Practice Guidelines: Directions for a New Program; Institute of Medicine: Washington, DC, USA, 1990. [Google Scholar]
  32. Isern, D.; Moreno, A. Computer-based execution of clinical guidelines: A review. Int. J. Med. Inform. 2008, 77, 787–808. [Google Scholar] [CrossRef]
  33. Ottani, F.; Binetti, N.; Casagranda, I.; Cassin, M.; Cavazza, M.; Grifoni, S.; Lenzi, T.; Lorenzoni, R.; Sbrojavacca, R.; Tanzi, P.; et al. Percorso di valutazione del dolore toracico—Valutazione dei requisiti di base per l’implementazione negli ospedali italiani [Chest pain assessment pathway: Evaluation of the basic requirements for implementation in Italian hospitals]. G. Ital. Cardiol. 2009, 10, 46–63. [Google Scholar]
  34. Sutton, D.R.; Fox, J. The syntax and semantics of the PROforma guideline modeling language. J. Am. Med. Inform. Assoc. 2003, 10, 433–443. [Google Scholar] [CrossRef] [PubMed]
  35. Slavich, G.; Buonocore, G. Forensic medicine aspects in patients with chest pain in the emergency room. Ital. Heart J. Suppl. 2001, 2, 381–384. [Google Scholar] [PubMed]
  36. PricewaterhouseCoopers Health Research Institute. Healthcare Unwired: New Business Models Delivering Care Anywhere; PricewaterhouseCoopers: London, UK, 2010. [Google Scholar]
  37. Chatterjee, S.; Chakraborty, S.; Sarker, S.; Sarker, S.; Lau, F.Y. Examining the success factors for mobile work in healthcare: A deductive study. Decis. Support Syst. 2009, 46, 620–633. [Google Scholar] [CrossRef]
  38. Oviatt, S.; Cohen, P.R. The paradigm shift to multimodality in contemporary computer interfaces. Synth. Lect. Hum. Centered Inform. 2015, 8, 1–243. [Google Scholar] [CrossRef]
  39. Sauro, J.; Lewis, J.R. Quantifying the User Experience: Practical Statistics for User Research; Morgan Kaufmann: Burlington, MA, USA, 2016. [Google Scholar]
  40. De Leoni, M.; Marrella, A.; Russo, A. Process-aware information systems for emergency management. In Proceedings of the European Conference on a Service-Based Internet, Ghent, Belgium, 13–15 December 2010; pp. 50–58. [Google Scholar]
  41. Marrella, A.; Mecella, M.; Sardina, S. Intelligent Process Adaptation in the SmartPM System. ACM Trans. Intell. Syst. Technol. 2016, 8, 25. [Google Scholar] [CrossRef]
  42. Marrella, A.; Mecella, M.; Sardiña, S. Supporting adaptiveness of cyber-physical processes through action-based formalisms. AI Commun. 2018, 31, 47–74. [Google Scholar] [CrossRef]
  43. Dadam, P.; Reichert, M.; Kuhn, K. Clinical Workflows—The killer application for process-oriented information systems. In BIS 2000; Springer: London, UK, 2000; pp. 36–59. [Google Scholar]
  44. Mans, R.S.; van der Aalst, W.M.P.; Russell, N.C.; Bakker, P.J.M. Flexibility Schemes for Workflow Management Systems. In Business Process Management Workshops; Springer: Berlin/Heidelberg, Germany, 2008; pp. 361–372. [Google Scholar]
  45. Rojo, M.G.; Rolon, E.; Calahorra, L.; Garcia, F.O.; Sanchez, R.P.; Ruiz, F.; Ballester, N.; Armenteros, M.; Rodriguez, T.; Espartero, R.M. Implementation of the Business Process Modelling Notation (BPMN) in the modelling of anatomic pathology processes. Diagn. Pathol. 2008, 3 (Suppl. 1), S22. [Google Scholar] [CrossRef]
  46. Strasser, M.; Pfeifer, F.; Helm, E.; Schuler, A.; Altmann, J. Defining and reconstructing clinical processes based on IHE and BPMN 2.0. Stud. Health Technol. Inform. 2011, 169, 482–486. [Google Scholar]
  47. Ruiz, F.; Garcia, F.; Calahorra, L.; Llorente, C.; Goncalves, L.; Daniel, C.; Blobel, B. Business process modeling in healthcare. Stud. Health Technol. Inform. 2012, 179, 75–87. [Google Scholar]
  48. Alexandrou, D.; Mentzas, G. Research Challenges for Achieving Healthcare Business Process Interoperability. In Proceedings of the 2009 International Conference on eHealth, Telemedicine, and Social Medicine, Cancun, Mexico, 1–7 February 2009; pp. 58–65. [Google Scholar]
  49. Van der Aalst, W.; Adriansyah, A.; De Medeiros, A.K.A.; Arcieri, F.; Baier, T.; Blickle, T.; Bose, J.C.; van den Brand, P.; Brandtjen, R.; Buijs, J.; et al. Process mining manifesto. In Proceedings of the International Conference on Business Process Management, Clermont-Ferrand, France, 29 August–2 September 2011; pp. 169–194. [Google Scholar]
  50. Van der Aalst, W.M.P.; Weijters, T.; Maruster, L. Workflow Mining: Discovering Process Models from Event Logs. IEEE Trans. Knowl. Data Eng. 2004, 16, 1128–1142. [Google Scholar] [CrossRef]
  51. Van der Aalst, W.M. Process Mining: Data Science in Action; Springer: Berlin, Germany, 2016. [Google Scholar]
  52. Augusto, A.; Conforti, R.; Dumas, M.; Rosa, M.L.; Maggi, F.M.; Marrella, A.; Mecella, M.; Soo, A. Automated Discovery of Process Models from Event Logs: Review and Benchmark. IEEE Trans. Knowl. Data Eng. 2018. [Google Scholar] [CrossRef]
  53. Van der Aalst, W.M.P.; Adriansyah, A.; van Dongen, B.F. Replaying history on process models for conformance checking and performance analysis. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2012, 2, 182–192. [Google Scholar] [CrossRef]
  54. De Giacomo, G.; Maggi, F.M.; Marrella, A.; Patrizi, F. On the Disruptive Effectiveness of Automated Planning for LTLf-Based Trace Alignment. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 3555–3561. [Google Scholar]
  55. De Leoni, M.; Marrella, A. Aligning Real Process Executions and Prescriptive Process Models through Automated Planning. Expert Syst. Appl. 2017, 82, 162–183. [Google Scholar] [CrossRef]
  56. De Giacomo, G.; Maggi, F.M.; Marrella, A.; Sardiña, S. Computing Trace Alignment against Declarative Process Models through Planning. In Proceedings of the Twenty-Sixth International Conference on Automated Planning and Scheduling (ICAPS 2016), London, UK, 12–17 June 2016; pp. 367–375. [Google Scholar]
  57. De Leoni, M.; Lanciano, G.; Marrella, A. Aligning Partially-Ordered Process-Execution Traces and Models Using Automated Planning. In Proceedings of the Twenty-Eighth International Conference on Automated Planning and Scheduling (ICAPS 2018), Delft, The Netherlands, 24–29 June 2018; pp. 321–329. [Google Scholar]
  58. Maggi, F.M.; Corapi, D.; Russo, A.; Lupu, E.; Visaggio, G. Revising Process Models through Inductive Learning. In Business Process Management Workshops; zur Muehlen, M., Su, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 182–193. [Google Scholar]
  59. Fahland, D.; van der Aalst, W.M. Repairing process models to reflect reality. In Proceedings of the International Conference on Business Process Management, Tallinn, Estonia, 3–6 September 2012; pp. 229–245. [Google Scholar]
  60. Maggi, F.M.; Marrella, A.; Capezzuto, G.; Cervantes, A.A. Explaining Non-compliance of Business Process Models Through Automated Planning. In Service-Oriented Computing; Pahl, C., Vukovic, M., Yin, J., Yu, Q., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 181–197. [Google Scholar]
  61. Anzbock, R.; Dustdar, S. Semi-automatic generation of Web services and BPEL processes—A Model-Driven approach. In Proceedings of the 3rd International Conference on Business Process Management, Nancy, France, 5–8 September 2005; pp. 64–79. [Google Scholar]
  62. Poulymenopoulou, M.; Malamateniou, F.; Vassilacopoulos, G. Emergency healthcare process automation using workflow technology and web services. Med. Inform. Internet Med. 2003, 28, 195–207. [Google Scholar] [CrossRef] [PubMed]
  63. Poulymenopoulou, M.; Malamateniou, F.; Vassilacopoulos, G. Emergency healthcare process automation using mobile computing and cloud services. J. Med. Syst. 2012, 36, 3233–3241. [Google Scholar] [CrossRef] [PubMed]
  64. Leonardi, G.; Panzarasa, S.; Quaglini, S.; Stefanelli, M.; van der Aalst, W.M.P. Interacting Agents through a Web-based Health Serviceflow Management System. J. Biomed. Inform. 2007, 40, 486–499. [Google Scholar] [CrossRef] [PubMed]
  65. Ma, X.; Lu, S.; Yang, K. Service-Oriented Architecture for SPDFLOW: A Healthcare Workflow System for Sterile Processing Departments. In Proceedings of the IEEE Ninth International Conference on Services Computing (SCC), Honolulu, HI, USA, 24–29 June 2012; pp. 507–514. [Google Scholar]
  66. Capata, A.; Marella, A.; Russo, R. A geo-based application for the management of mobile actors during crisis situations. In Proceedings of the 5th International ISCRAM Conference, Washington, DC, USA, 4–7 May 2008. [Google Scholar]
  67. Humayoun, S.R.; Catarci, T.; de Leoni, M.; Marrella, A.; Mecella, M.; Bortenschlager, M.; Steinmann, R. The WORKPAD User Interface and Methodology: Developing Smart and Effective Mobile Applications for Emergency Operators. In Universal Access in Human–Computer Interaction. Applications and Services; Springer: Berlin/Heidelberg, Germany, 2009; pp. 343–352. [Google Scholar]
  68. Marrella, A.; Mecella, M.; Russo, A. Collaboration on-the-field: Suggestions and beyond. In Proceedings of the 8th International Conference on Information Systems for Crisis Response and Management (ISCRAM), Lisbon, Portugal, 8–11 May 2011. [Google Scholar]
  69. López-Cózar, R.; Callejas, Z. Multimodal dialogue for ambient intelligence and smart environments. In Handbook of Ambient Intelligence and Smart Environments; Springer Science & Business Media: Berlin, Germany, 2010; pp. 559–579. [Google Scholar]
  70. Bongartz, S.; Jin, Y.; Paternò, F.; Rett, J.; Santoro, C.; Spano, L.D. Adaptive user interfaces for smart environments with the support of model-based languages. In Proceedings of the International Joint Conference on Ambient Intelligence, Pisa, Italy, 13–15 November 2012; pp. 33–48. [Google Scholar]
  71. Jaber, R.N.; AlTarawneh, R.; Humayoun, S.R. Characterizing Pairs Collaboration in a Mobile-equipped Shared-Wall Display Supported Collaborative Setup. arXiv 2019, arXiv:1904.13364. [Google Scholar]
  72. Humayoun, S.R.; Sharf, M.; AlTarawneh, R.; Ebert, A.; Catarci, T. ViZCom: Viewing, Zooming and Commenting Through Mobile Devices. In Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces (ITS ’15), Madeira, Portugal, 15–18 November 2015; ACM: New York, NY, USA, 2015; pp. 331–336. [Google Scholar] [CrossRef]
  73. Yang, J.; Yang, W.; Denecke, M.; Waibel, A. Smart sight: A tourist assistant system. In Proceedings of the Third International Symposium on Wearable Computers, San Francisco, CA, USA, 18–19 October 1999; p. 73. [Google Scholar]
  74. Collerton, T.; Marrella, A.; Mecella, M.; Catarci, T. Route Recommendations to Business Travelers Exploiting Crowd-Sourced Data. In Proceedings of the International Conference on Mobile Web and Information Systems, Prague, Czech Republic, 21–23 August 2017; pp. 3–17. [Google Scholar]
  75. Flood, D.; Germanakos, P.; Harrison, R.; McCaffery, F.; Samaras, G. Estimating Cognitive Overload in Mobile Applications for Decision Support within the Medical Domain. In Proceedings of the 14th International Conference on Enterprise Information Systems (ICEIS 2012), Wroclaw, Poland, 28 June–1 July 2012; Volume 3, pp. 103–107. [Google Scholar]
  76. Jourde, F.; Laurillau, Y.; Moran, A.; Nigay, L. Towards Specifying Multimodal Collaborative User Interfaces: A Comparison of Collaboration Notations. In International Workshop on Design, Specification, and Verification of Interactive Systems; Springer: Berlin/Heidelberg, Germany, 2008; pp. 281–286. [Google Scholar]
  77. McGee-Lennon, M.R.; Carberry, M.; Gray, P.D. HECTOR: A PDA Based Clinical Handover System; DCS Technical Report Series; Technical Report; Department of Computing Science, University of Glasgow: Glasgow, UK, 2007; pp. 1–14. [Google Scholar]
  78. Iyengar, M.; Carruth, T.; Florez-Arango, J.; Dunn, K. Informatics-based medical procedure assistance during space missions. Hippokratia 2008, 12, 23. [Google Scholar] [PubMed]
  79. Iyengar, M.S.; Florez-Arango, J.F.; Garcia, C.A. GuideView: A system for developing structured, multimodal, multi-platform persuasive applications. In Proceedings of the 4th International Conference on Persuasive Technology, Claremont, CA, USA, 26–29 April 2009. [Google Scholar]
  80. Marx, M.; Carter, J.; Phillips, M.; Holthouse, M.; Seabury, S.; Elizondo-Cecenas, J.; Phaneuf, B. System and Method for Developing Interactive Speech Applications. U.S. Patent US6173266B1, 9 January 2001. [Google Scholar]
  81. Hedin, J.; Meier, B. Voice Control of a User Interface to Service Applications. U.S. Patent US6185535B1, 6 February 2001. [Google Scholar]
  82. Sharma, R.; Yeasin, M.; Krahnstoever, N.; Rauschert, I.; Cai, G.; Brewer, I.; MacEachren, A.M.; Sengupta, K. Speech-gesture driven multimodal interfaces for crisis management. Proc. IEEE 2003, 91, 1327–1354. [Google Scholar] [CrossRef]
  83. Potamianos, G. Audio-visual automatic speech recognition and related bimodal speech technologies: A review of the state-of-the-art and open problems. In Proceedings of the 2009 IEEE Workshop on Automatic Speech Recognition Understanding, Merano, Italy, 13 November–17 December 2009; p. 22. [Google Scholar] [CrossRef]
  84. Hansen, T.; Eklund, J.; Sprinkle, J.; Bajcsy, R.; Sastry, S. Using smart sensors and a camera phone to detect and verify the fall of elderly persons. In Proceedings of the 3rd European Medicine, Biology and Engineering Conference, Prague, Czech Republic, 20–25 November 2005. [Google Scholar]
  85. Lisetti, C.; Nasoz, F.; LeRouge, C.; Ozyer, O.; Alvarez, K. Developing multimodal intelligent affective interfaces for tele-home health care. Int. J. Hum. Stud. 2003, 59, 245–255. [Google Scholar] [CrossRef]
  86. Cohn, J.F.; Kruez, T.S.; Matthews, I.; Yang, Y.; Nguyen, M.H.; Padilla, M.T.; Zhou, F.; la Torre, F.D. Detecting depression from facial actions and vocal prosody. In Proceedings of the 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, Amsterdam, The Netherlands, 10–12 September 2009; pp. 1–7. [Google Scholar]
  87. Mumolo, E.; Nolich, M.; Vercelli, G. Pro-active service robots in a health care framework: Vocal interaction using natural language and prosody. In Proceedings of the 10th IEEE International Workshop on Robot and Human Interactive Communication. ROMAN 2001 (Cat. No.01TH8591), Paris, France, 18–21 September 2001; pp. 606–611. [Google Scholar] [CrossRef]
  88. Cohen, P.R.; Johnston, M.; McGee, D.; Oviatt, S.; Pittman, J.; Smith, I.; Chen, L.; Clow, J. QuickSet: Multimodal Interaction for Distributed Applications. In Proceedings of the Fifth ACM International Conference on Multimedia (MULTIMEDIA ’97), Seattle, WA, USA, 9–13 November 1997; ACM: New York, NY, USA, 1997; pp. 31–40. [Google Scholar] [CrossRef]
  89. Fleury, A.; Vacher, M.; Noury, N. SVM-Based Multimodal Classification of Activities of Daily Living in Health Smart Homes: Sensors, Algorithms, and First Experimental Results. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 274–283. [Google Scholar] [CrossRef] [PubMed]
  90. Billinghurst, M.; Savage, J.; Oppenheimer, P.; Edmond, C. The expert surgical assistant. An intelligent virtual environment with multimodal input. Stud. Health Technol. Inform. 1996, 29, 590–607. [Google Scholar] [PubMed]
Figure 1. Classifying healthcare processes in six clinical macro steps.
Figure 2. Transforming CGs into patient-specific care pathways.
Figure 3. The list of clinical characteristics to calculate the chest pain score.
Figure 4. The multimodal GUI adopted by doctors.
Figure 5. A care pathway for chest pain represented as a BPMN process.
Figure 6. The GUI deployed on the mobile devices adopted by the medical staff.
Figure 7. TESTMED system architecture.
Figure 8. A doctor using the TESTMED system in a ward during the visit of a patient simulator.
Figure 9. The vocal/touch user interface responsiveness tests.
Figure 10. Comparison between the ratings obtained in the two user studies.
Figure 11. Results of a 2-sample t-test applied over statement Q4.
Figure 12. A benchmark to evaluate the usability of a GUI.
Table 1. Results of the first user study.
        Q1    Q2    Q3    Q4    Q5    Q6    Q7    Q8    Q9    Q10   Q11
User1   4     3     4     3     2     3     4     4     4     3     3
User2   4     3     4     2     4     2     2     3     2     3     3
User3   5     3     4     3     5     2     5     4     5     4     4
User4   4     4     4     3     3     4     4     4     3     3     4
User5   3     3     4     2     3     4     4     4     4     3     4
User6   3     4     5     3     3     5     5     4     4     4     4
User7   3     4     4     3     4     4     5     4     4     4     4
Avg     3.7   3.43  4.14  2.7   3.43  3.43  4.14  3.86  3.71  3.43  3.71
Table 2. Results of the second user study.
        Q1    Q2    Q3    Q4    Q5    Q6    Q7    Q8    Q9    Q10   Q11
User1   4     4     4     4     5     3     4     4     4     4     4
User2   4     4     5     3     5     2     3     4     2     4     3
User3   3     3     3     4     4     3     4     4     3     3     4
User4   5     4     3     2     4     5     5     4     4     5     4
User5   1     5     5     5     5     5     5     5     5     5     5
User6   3     5     4     5     4     5     4     4     4     5     5
User7   3     4     5     5     4     4     5     4     5     4     4
Avg     3.29  4.14  4.14  4     4.43  3.86  4.29  4.14  3.86  4.29  4.14
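Figure 11 reports a 2-sample t-test applied over statement Q4. As an illustration, the statistic can be recomputed from the Q4 ratings of the two user studies (Tables 1 and 2). Note this is a sketch under assumptions: Welch's unequal-variance variant is used here, and which t-test variant the authors actually applied is not stated in this excerpt.

```python
from statistics import mean, variance
from math import sqrt

# Q4 ratings per user, read off the Q4 columns of Tables 1 and 2.
study1 = [3, 2, 3, 3, 2, 3, 3]  # first user study
study2 = [4, 3, 4, 2, 5, 5, 5]  # second user study

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(study1, study2)
print(f"mean(study1)={mean(study1):.2f}, mean(study2)={mean(study2):.2f}, t={t:.2f}")
```

With these values, Q4 is the statement where the two studies diverge most (average 2.7 vs. 4.0), which is why it is singled out for a significance test in Figure 11.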

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).